Feb 24 14:44:49 localhost kernel: Linux version 5.14.0-681.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Wed Feb 11 20:19:22 UTC 2026
Feb 24 14:44:49 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 24 14:44:49 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 24 14:44:49 localhost kernel: BIOS-provided physical RAM map:
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 24 14:44:49 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 24 14:44:49 localhost kernel: NX (Execute Disable) protection: active
Feb 24 14:44:49 localhost kernel: APIC: Static calls initialized
Feb 24 14:44:49 localhost kernel: SMBIOS 2.8 present.
Feb 24 14:44:49 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 24 14:44:49 localhost kernel: Hypervisor detected: KVM
Feb 24 14:44:49 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 24 14:44:49 localhost kernel: kvm-clock: using sched offset of 9319326973 cycles
Feb 24 14:44:49 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 24 14:44:49 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 24 14:44:49 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 24 14:44:49 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 24 14:44:49 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 24 14:44:49 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 24 14:44:49 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 24 14:44:49 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 24 14:44:49 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 24 14:44:49 localhost kernel: Using GB pages for direct mapping
Feb 24 14:44:49 localhost kernel: RAMDISK: [mem 0x1b6f6000-0x29b72fff]
Feb 24 14:44:49 localhost kernel: ACPI: Early table checksum verification disabled
Feb 24 14:44:49 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 24 14:44:49 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 24 14:44:49 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 24 14:44:49 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 24 14:44:49 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 24 14:44:49 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 24 14:44:49 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 24 14:44:49 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 24 14:44:49 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 24 14:44:49 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 24 14:44:49 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 24 14:44:49 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 24 14:44:49 localhost kernel: No NUMA configuration found
Feb 24 14:44:49 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 24 14:44:49 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Feb 24 14:44:49 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 24 14:44:49 localhost kernel: Zone ranges:
Feb 24 14:44:49 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 24 14:44:49 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 24 14:44:49 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 24 14:44:49 localhost kernel:   Device   empty
Feb 24 14:44:49 localhost kernel: Movable zone start for each node
Feb 24 14:44:49 localhost kernel: Early memory node ranges
Feb 24 14:44:49 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 24 14:44:49 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 24 14:44:49 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 24 14:44:49 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 24 14:44:49 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 24 14:44:49 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 24 14:44:49 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 24 14:44:49 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 24 14:44:49 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 24 14:44:49 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 24 14:44:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 24 14:44:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 24 14:44:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 24 14:44:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 24 14:44:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 24 14:44:49 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 24 14:44:49 localhost kernel: TSC deadline timer available
Feb 24 14:44:49 localhost kernel: CPU topo: Max. logical packages:   8
Feb 24 14:44:49 localhost kernel: CPU topo: Max. logical dies:       8
Feb 24 14:44:49 localhost kernel: CPU topo: Max. dies per package:   1
Feb 24 14:44:49 localhost kernel: CPU topo: Max. threads per core:   1
Feb 24 14:44:49 localhost kernel: CPU topo: Num. cores per package:     1
Feb 24 14:44:49 localhost kernel: CPU topo: Num. threads per package:   1
Feb 24 14:44:49 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 24 14:44:49 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 24 14:44:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 24 14:44:49 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 24 14:44:49 localhost kernel: Booting paravirtualized kernel on KVM
Feb 24 14:44:49 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 24 14:44:49 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 24 14:44:49 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 24 14:44:49 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 24 14:44:49 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 24 14:44:49 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 24 14:44:49 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 24 14:44:49 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64", will be passed to user space.
Feb 24 14:44:49 localhost kernel: random: crng init done
Feb 24 14:44:49 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 24 14:44:49 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 24 14:44:49 localhost kernel: Fallback order for Node 0: 0 
Feb 24 14:44:49 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 24 14:44:49 localhost kernel: Policy zone: Normal
Feb 24 14:44:49 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 24 14:44:49 localhost kernel: software IO TLB: area num 8.
Feb 24 14:44:49 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 24 14:44:49 localhost kernel: ftrace: allocating 49565 entries in 194 pages
Feb 24 14:44:49 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 24 14:44:49 localhost kernel: Dynamic Preempt: voluntary
Feb 24 14:44:49 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 24 14:44:49 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 24 14:44:49 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 24 14:44:49 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 24 14:44:49 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 24 14:44:49 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 24 14:44:49 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 24 14:44:49 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 24 14:44:49 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 24 14:44:49 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 24 14:44:49 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 24 14:44:49 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 24 14:44:49 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 24 14:44:49 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 24 14:44:49 localhost kernel: Console: colour VGA+ 80x25
Feb 24 14:44:49 localhost kernel: printk: console [ttyS0] enabled
Feb 24 14:44:49 localhost kernel: ACPI: Core revision 20230331
Feb 24 14:44:49 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 24 14:44:49 localhost kernel: x2apic enabled
Feb 24 14:44:49 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 24 14:44:49 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 24 14:44:49 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 24 14:44:49 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 24 14:44:49 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 24 14:44:49 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 24 14:44:49 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 24 14:44:49 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 24 14:44:49 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 24 14:44:49 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 24 14:44:49 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 24 14:44:49 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 24 14:44:49 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 24 14:44:49 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 24 14:44:49 localhost kernel: active return thunk: retbleed_return_thunk
Feb 24 14:44:49 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 24 14:44:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 24 14:44:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 24 14:44:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 24 14:44:49 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 24 14:44:49 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 24 14:44:49 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 24 14:44:49 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 24 14:44:49 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 24 14:44:49 localhost kernel: landlock: Up and running.
Feb 24 14:44:49 localhost kernel: Yama: becoming mindful.
Feb 24 14:44:49 localhost kernel: SELinux:  Initializing.
Feb 24 14:44:49 localhost kernel: LSM support for eBPF active
Feb 24 14:44:49 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 24 14:44:49 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 24 14:44:49 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 24 14:44:49 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 24 14:44:49 localhost kernel: ... version:                0
Feb 24 14:44:49 localhost kernel: ... bit width:              48
Feb 24 14:44:49 localhost kernel: ... generic registers:      6
Feb 24 14:44:49 localhost kernel: ... value mask:             0000ffffffffffff
Feb 24 14:44:49 localhost kernel: ... max period:             00007fffffffffff
Feb 24 14:44:49 localhost kernel: ... fixed-purpose events:   0
Feb 24 14:44:49 localhost kernel: ... event mask:             000000000000003f
Feb 24 14:44:49 localhost kernel: signal: max sigframe size: 1776
Feb 24 14:44:49 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 24 14:44:49 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 24 14:44:49 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 24 14:44:49 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 24 14:44:49 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 24 14:44:49 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 24 14:44:49 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 24 14:44:49 localhost kernel: node 0 deferred pages initialised in 9ms
Feb 24 14:44:49 localhost kernel: Memory: 7617752K/8388068K available (16384K kernel code, 5795K rwdata, 13948K rodata, 4204K init, 7180K bss, 764384K reserved, 0K cma-reserved)
Feb 24 14:44:49 localhost kernel: devtmpfs: initialized
Feb 24 14:44:49 localhost kernel: x86/mm: Memory block size: 128MB
Feb 24 14:44:49 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 24 14:44:49 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 24 14:44:49 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 24 14:44:49 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 24 14:44:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 24 14:44:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 24 14:44:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 24 14:44:49 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 24 14:44:49 localhost kernel: audit: type=2000 audit(1771944287.270:1): state=initialized audit_enabled=0 res=1
Feb 24 14:44:49 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 24 14:44:49 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 24 14:44:49 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 24 14:44:49 localhost kernel: cpuidle: using governor menu
Feb 24 14:44:49 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 24 14:44:49 localhost kernel: PCI: Using configuration type 1 for base access
Feb 24 14:44:49 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 24 14:44:49 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 24 14:44:49 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 24 14:44:49 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 24 14:44:49 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 24 14:44:49 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 24 14:44:49 localhost kernel: Demotion targets for Node 0: null
Feb 24 14:44:49 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 24 14:44:49 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 24 14:44:49 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 24 14:44:49 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 24 14:44:49 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 24 14:44:49 localhost kernel: ACPI: Interpreter enabled
Feb 24 14:44:49 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 24 14:44:49 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 24 14:44:49 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 24 14:44:49 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 24 14:44:49 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 24 14:44:49 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 24 14:44:49 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [3] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [4] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [5] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [6] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [7] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [8] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [9] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [10] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [11] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [12] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [13] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [14] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [15] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [16] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [17] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [18] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [19] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [20] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [21] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [22] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [23] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [24] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [25] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [26] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [27] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [28] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [29] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [30] registered
Feb 24 14:44:49 localhost kernel: acpiphp: Slot [31] registered
Feb 24 14:44:49 localhost kernel: PCI host bridge to bus 0000:00
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 24 14:44:49 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 24 14:44:49 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 24 14:44:49 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 24 14:44:49 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 24 14:44:49 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 24 14:44:49 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 24 14:44:49 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 24 14:44:49 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 24 14:44:49 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 24 14:44:49 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 24 14:44:49 localhost kernel: iommu: Default domain type: Translated
Feb 24 14:44:49 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 24 14:44:49 localhost kernel: SCSI subsystem initialized
Feb 24 14:44:49 localhost kernel: ACPI: bus type USB registered
Feb 24 14:44:49 localhost kernel: usbcore: registered new interface driver usbfs
Feb 24 14:44:49 localhost kernel: usbcore: registered new interface driver hub
Feb 24 14:44:49 localhost kernel: usbcore: registered new device driver usb
Feb 24 14:44:49 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 24 14:44:49 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 24 14:44:49 localhost kernel: PTP clock support registered
Feb 24 14:44:49 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 24 14:44:49 localhost kernel: NetLabel: Initializing
Feb 24 14:44:49 localhost kernel: NetLabel:  domain hash size = 128
Feb 24 14:44:49 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 24 14:44:49 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 24 14:44:49 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 24 14:44:49 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 24 14:44:49 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 24 14:44:49 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 24 14:44:49 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 24 14:44:49 localhost kernel: vgaarb: loaded
Feb 24 14:44:49 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 24 14:44:49 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 24 14:44:49 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 24 14:44:49 localhost kernel: pnp: PnP ACPI init
Feb 24 14:44:49 localhost kernel: pnp 00:03: [dma 2]
Feb 24 14:44:49 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 24 14:44:49 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 24 14:44:49 localhost kernel: NET: Registered PF_INET protocol family
Feb 24 14:44:49 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 24 14:44:49 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 24 14:44:49 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 24 14:44:49 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 24 14:44:49 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 24 14:44:49 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 24 14:44:49 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 24 14:44:49 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 24 14:44:49 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 24 14:44:49 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 24 14:44:49 localhost kernel: NET: Registered PF_XDP protocol family
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 24 14:44:49 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 24 14:44:49 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 24 14:44:49 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 24 14:44:49 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 21524 usecs
Feb 24 14:44:49 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 24 14:44:49 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 24 14:44:49 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 24 14:44:49 localhost kernel: ACPI: bus type thunderbolt registered
Feb 24 14:44:49 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 24 14:44:49 localhost kernel: Initialise system trusted keyrings
Feb 24 14:44:49 localhost kernel: Key type blacklist registered
Feb 24 14:44:49 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 24 14:44:49 localhost kernel: zbud: loaded
Feb 24 14:44:49 localhost kernel: integrity: Platform Keyring initialized
Feb 24 14:44:49 localhost kernel: integrity: Machine keyring initialized
Feb 24 14:44:49 localhost kernel: Freeing initrd memory: 233972K
Feb 24 14:44:49 localhost kernel: NET: Registered PF_ALG protocol family
Feb 24 14:44:49 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 24 14:44:49 localhost kernel: Key type asymmetric registered
Feb 24 14:44:49 localhost kernel: Asymmetric key parser 'x509' registered
Feb 24 14:44:49 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 24 14:44:49 localhost kernel: io scheduler mq-deadline registered
Feb 24 14:44:49 localhost kernel: io scheduler kyber registered
Feb 24 14:44:49 localhost kernel: io scheduler bfq registered
Feb 24 14:44:49 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 24 14:44:49 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 24 14:44:49 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 24 14:44:49 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 24 14:44:49 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 24 14:44:49 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 24 14:44:49 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 24 14:44:49 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 24 14:44:49 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 24 14:44:49 localhost kernel: Non-volatile memory driver v1.3
Feb 24 14:44:49 localhost kernel: rdac: device handler registered
Feb 24 14:44:49 localhost kernel: hp_sw: device handler registered
Feb 24 14:44:49 localhost kernel: emc: device handler registered
Feb 24 14:44:49 localhost kernel: alua: device handler registered
Feb 24 14:44:49 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 24 14:44:49 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 24 14:44:49 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 24 14:44:49 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 24 14:44:49 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 24 14:44:49 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 24 14:44:49 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 24 14:44:49 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-681.el9.x86_64 uhci_hcd
Feb 24 14:44:49 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 24 14:44:49 localhost kernel: hub 1-0:1.0: USB hub found
Feb 24 14:44:49 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 24 14:44:49 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 24 14:44:49 localhost kernel: usbserial: USB Serial support registered for generic
Feb 24 14:44:49 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 24 14:44:49 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 24 14:44:49 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 24 14:44:49 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 24 14:44:49 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 24 14:44:49 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 24 14:44:49 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 24 14:44:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 24 14:44:49 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-24T14:44:48 UTC (1771944288)
Feb 24 14:44:49 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 24 14:44:49 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 24 14:44:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 24 14:44:49 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 24 14:44:49 localhost kernel: usbcore: registered new interface driver usbhid
Feb 24 14:44:49 localhost kernel: usbhid: USB HID core driver
Feb 24 14:44:49 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 24 14:44:49 localhost kernel: Initializing XFRM netlink socket
Feb 24 14:44:49 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 24 14:44:49 localhost kernel: Segment Routing with IPv6
Feb 24 14:44:49 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 24 14:44:49 localhost kernel: mpls_gso: MPLS GSO support
Feb 24 14:44:49 localhost kernel: IPI shorthand broadcast: enabled
Feb 24 14:44:49 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 24 14:44:49 localhost kernel: AES CTR mode by8 optimization enabled
Feb 24 14:44:49 localhost kernel: sched_clock: Marking stable (1107006824, 147228725)->(1369273438, -115037889)
Feb 24 14:44:49 localhost kernel: registered taskstats version 1
Feb 24 14:44:49 localhost kernel: Loading compiled-in X.509 certificates
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 24 14:44:49 localhost kernel: Demotion targets for Node 0: null
Feb 24 14:44:49 localhost kernel: page_owner is disabled
Feb 24 14:44:49 localhost kernel: Key type .fscrypt registered
Feb 24 14:44:49 localhost kernel: Key type fscrypt-provisioning registered
Feb 24 14:44:49 localhost kernel: Key type big_key registered
Feb 24 14:44:49 localhost kernel: Key type encrypted registered
Feb 24 14:44:49 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 24 14:44:49 localhost kernel: Loading compiled-in module X.509 certificates
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 24 14:44:49 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 24 14:44:49 localhost kernel: ima: No architecture policies found
Feb 24 14:44:49 localhost kernel: evm: Initialising EVM extended attributes:
Feb 24 14:44:49 localhost kernel: evm: security.selinux
Feb 24 14:44:49 localhost kernel: evm: security.SMACK64 (disabled)
Feb 24 14:44:49 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 24 14:44:49 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 24 14:44:49 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 24 14:44:49 localhost kernel: evm: security.apparmor (disabled)
Feb 24 14:44:49 localhost kernel: evm: security.ima
Feb 24 14:44:49 localhost kernel: evm: security.capability
Feb 24 14:44:49 localhost kernel: evm: HMAC attrs: 0x1
Feb 24 14:44:49 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 24 14:44:49 localhost kernel: Running certificate verification RSA selftest
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 24 14:44:49 localhost kernel: Running certificate verification ECDSA selftest
Feb 24 14:44:49 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 24 14:44:49 localhost kernel: clk: Disabling unused clocks
Feb 24 14:44:49 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 24 14:44:49 localhost kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 24 14:44:49 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 24 14:44:49 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 388K
Feb 24 14:44:49 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 24 14:44:49 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 24 14:44:49 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 24 14:44:49 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 24 14:44:49 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 24 14:44:49 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 24 14:44:49 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 24 14:44:49 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 24 14:44:49 localhost kernel: Run /init as init process
Feb 24 14:44:49 localhost kernel:   with arguments:
Feb 24 14:44:49 localhost kernel:     /init
Feb 24 14:44:49 localhost kernel:   with environment:
Feb 24 14:44:49 localhost kernel:     HOME=/
Feb 24 14:44:49 localhost kernel:     TERM=linux
Feb 24 14:44:49 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64
Feb 24 14:44:49 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 24 14:44:49 localhost systemd[1]: Detected virtualization kvm.
Feb 24 14:44:49 localhost systemd[1]: Detected architecture x86-64.
Feb 24 14:44:49 localhost systemd[1]: Running in initrd.
Feb 24 14:44:49 localhost systemd[1]: No hostname configured, using default hostname.
Feb 24 14:44:49 localhost systemd[1]: Hostname set to <localhost>.
Feb 24 14:44:49 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 24 14:44:49 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 24 14:44:49 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 24 14:44:49 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 24 14:44:49 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 24 14:44:49 localhost systemd[1]: Reached target Local File Systems.
Feb 24 14:44:49 localhost systemd[1]: Reached target Path Units.
Feb 24 14:44:49 localhost systemd[1]: Reached target Slice Units.
Feb 24 14:44:49 localhost systemd[1]: Reached target Swaps.
Feb 24 14:44:49 localhost systemd[1]: Reached target Timer Units.
Feb 24 14:44:49 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 24 14:44:49 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 24 14:44:49 localhost systemd[1]: Listening on Journal Socket.
Feb 24 14:44:49 localhost systemd[1]: Listening on udev Control Socket.
Feb 24 14:44:49 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 24 14:44:49 localhost systemd[1]: Reached target Socket Units.
Feb 24 14:44:49 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 24 14:44:49 localhost systemd[1]: Starting Journal Service...
Feb 24 14:44:49 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 24 14:44:49 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 24 14:44:49 localhost systemd[1]: Starting Create System Users...
Feb 24 14:44:49 localhost systemd[1]: Starting Setup Virtual Console...
Feb 24 14:44:49 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 24 14:44:49 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 24 14:44:49 localhost systemd[1]: Finished Create System Users.
Feb 24 14:44:49 localhost systemd-journald[306]: Journal started
Feb 24 14:44:49 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/dc91584910804855b939c41e7d9bcc71) is 8.0M, max 153.6M, 145.6M free.
Feb 24 14:44:49 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Feb 24 14:44:49 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Feb 24 14:44:49 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 24 14:44:49 localhost systemd[1]: Started Journal Service.
Feb 24 14:44:49 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 24 14:44:49 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 24 14:44:49 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 24 14:44:49 localhost systemd[1]: Finished Setup Virtual Console.
Feb 24 14:44:49 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 24 14:44:49 localhost systemd[1]: Starting dracut cmdline hook...
Feb 24 14:44:49 localhost dracut-cmdline[328]: dracut-9 dracut-057-110.git20260130.el9
Feb 24 14:44:49 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 24 14:44:49 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 24 14:44:49 localhost systemd[1]: Finished dracut cmdline hook.
Feb 24 14:44:49 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 24 14:44:49 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 24 14:44:49 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 24 14:44:49 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 24 14:44:49 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 24 14:44:49 localhost kernel: RPC: Registered udp transport module.
Feb 24 14:44:49 localhost kernel: RPC: Registered tcp transport module.
Feb 24 14:44:49 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 24 14:44:49 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 24 14:44:49 localhost rpc.statd[447]: Version 2.5.4 starting
Feb 24 14:44:49 localhost rpc.statd[447]: Initializing NSM state
Feb 24 14:44:49 localhost rpc.idmapd[452]: Setting log level to 0
Feb 24 14:44:49 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 24 14:44:49 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 24 14:44:49 localhost systemd-udevd[465]: Using default interface naming scheme 'rhel-9.0'.
Feb 24 14:44:49 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 24 14:44:49 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 24 14:44:49 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 24 14:44:49 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 24 14:44:49 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 24 14:44:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 24 14:44:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 24 14:44:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 24 14:44:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 24 14:44:49 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 24 14:44:49 localhost systemd[1]: Reached target Network.
Feb 24 14:44:49 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 24 14:44:49 localhost systemd[1]: Starting dracut initqueue hook...
Feb 24 14:44:49 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 24 14:44:49 localhost systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 14:44:49 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 24 14:44:49 localhost kernel:  vda: vda1
Feb 24 14:44:49 localhost kernel: libata version 3.00 loaded.
Feb 24 14:44:49 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 24 14:44:49 localhost kernel: scsi host0: ata_piix
Feb 24 14:44:49 localhost kernel: scsi host1: ata_piix
Feb 24 14:44:49 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 24 14:44:49 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 24 14:44:49 localhost systemd[1]: Found device /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 24 14:44:49 localhost kernel: ACPI: bus type drm_connector registered
Feb 24 14:44:49 localhost systemd[1]: Reached target Initrd Root Device.
Feb 24 14:44:50 localhost kernel: ata1: found unknown device (class 0)
Feb 24 14:44:50 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 24 14:44:50 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 24 14:44:50 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 24 14:44:50 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 24 14:44:50 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 24 14:44:50 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 24 14:44:50 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 24 14:44:50 localhost kernel: Console: switching to colour dummy device 80x25
Feb 24 14:44:50 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 24 14:44:50 localhost kernel: [drm] features: -context_init
Feb 24 14:44:50 localhost kernel: [drm] number of scanouts: 1
Feb 24 14:44:50 localhost kernel: [drm] number of cap sets: 0
Feb 24 14:44:50 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 24 14:44:50 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 24 14:44:50 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 24 14:44:50 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 24 14:44:50 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 24 14:44:50 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 24 14:44:50 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 24 14:44:50 localhost systemd[1]: Reached target System Initialization.
Feb 24 14:44:50 localhost systemd[1]: Reached target Basic System.
Feb 24 14:44:50 localhost systemd[1]: Finished dracut initqueue hook.
Feb 24 14:44:50 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 24 14:44:50 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 24 14:44:50 localhost systemd[1]: Reached target Remote File Systems.
Feb 24 14:44:50 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 24 14:44:50 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 24 14:44:50 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c...
Feb 24 14:44:50 localhost systemd-fsck[566]: /usr/sbin/fsck.xfs: XFS file system.
Feb 24 14:44:50 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 24 14:44:50 localhost systemd[1]: Mounting /sysroot...
Feb 24 14:44:50 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 24 14:44:50 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9d578f93-c4e9-4172-8459-ef150e54751c
Feb 24 14:44:50 localhost kernel: XFS (vda1): Ending clean mount
Feb 24 14:44:50 localhost systemd[1]: Mounted /sysroot.
Feb 24 14:44:50 localhost systemd[1]: Reached target Initrd Root File System.
Feb 24 14:44:50 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 24 14:44:50 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 24 14:44:50 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 24 14:44:50 localhost systemd[1]: Reached target Initrd File Systems.
Feb 24 14:44:50 localhost systemd[1]: Reached target Initrd Default Target.
Feb 24 14:44:50 localhost systemd[1]: Starting dracut mount hook...
Feb 24 14:44:50 localhost systemd[1]: Finished dracut mount hook.
Feb 24 14:44:50 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 24 14:44:51 localhost rpc.idmapd[452]: exiting on signal 15
Feb 24 14:44:51 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 24 14:44:51 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 24 14:44:51 localhost systemd[1]: Stopped target Network.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Timer Units.
Feb 24 14:44:51 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 24 14:44:51 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Basic System.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Path Units.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Remote File Systems.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Slice Units.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Socket Units.
Feb 24 14:44:51 localhost systemd[1]: Stopped target System Initialization.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Local File Systems.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Swaps.
Feb 24 14:44:51 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut mount hook.
Feb 24 14:44:51 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 24 14:44:51 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 24 14:44:51 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 24 14:44:51 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 24 14:44:51 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 24 14:44:51 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 24 14:44:51 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 24 14:44:51 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 24 14:44:51 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 24 14:44:51 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 24 14:44:51 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 24 14:44:51 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 24 14:44:51 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Closed udev Control Socket.
Feb 24 14:44:51 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Closed udev Kernel Socket.
Feb 24 14:44:51 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 24 14:44:51 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 24 14:44:51 localhost systemd[1]: Starting Cleanup udev Database...
Feb 24 14:44:51 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 24 14:44:51 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 24 14:44:51 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Stopped Create System Users.
Feb 24 14:44:51 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Finished Cleanup udev Database.
Feb 24 14:44:51 localhost systemd[1]: Reached target Switch Root.
Feb 24 14:44:51 localhost systemd[1]: Starting Switch Root...
Feb 24 14:44:51 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 24 14:44:51 localhost systemd[1]: Switching root.
Feb 24 14:44:51 localhost systemd-journald[306]: Journal stopped
Feb 24 14:44:52 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Feb 24 14:44:52 localhost kernel: audit: type=1404 audit(1771944291.392:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability open_perms=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 14:44:52 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 14:44:52 localhost kernel: audit: type=1403 audit(1771944291.503:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 24 14:44:52 localhost systemd[1]: Successfully loaded SELinux policy in 117.562ms.
Feb 24 14:44:52 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.177ms.
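The records above show SELinux switching to enforcing (enforcing=1 in the audit event) and the policy loading in about 118 ms, the normal first-transition sequence. As a sketch, the resulting state can be confirmed at a shell with the standard SELinux userspace tools, assumed installed on this image:

    getenforce   # should print "Enforcing" to match enforcing=1 above
    sestatus     # mode, loaded policy name, and policy version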
Feb 24 14:44:52 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 24 14:44:52 localhost systemd[1]: Detected virtualization kvm.
Feb 24 14:44:52 localhost systemd[1]: Detected architecture x86-64.
Feb 24 14:44:52 localhost systemd-rc-local-generator[648]: /etc/rc.d/rc.local is not marked executable, skipping.
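systemd's rc-local generator only creates rc-local.service when /etc/rc.d/rc.local carries the execute bit, so the skip above is expected on a stock cloud image. If local boot commands were actually wanted here, a minimal sketch:

    chmod +x /etc/rc.d/rc.local   # the generator picks the file up on the next boot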
Feb 24 14:44:52 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Stopped Switch Root.
Feb 24 14:44:52 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 24 14:44:52 localhost systemd[1]: Created slice Slice /system/getty.
Feb 24 14:44:52 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 24 14:44:52 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 24 14:44:52 localhost systemd[1]: Created slice User and Session Slice.
Feb 24 14:44:52 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 24 14:44:52 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 24 14:44:52 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 24 14:44:52 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 24 14:44:52 localhost systemd[1]: Stopped target Switch Root.
Feb 24 14:44:52 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 24 14:44:52 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 24 14:44:52 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 24 14:44:52 localhost systemd[1]: Reached target Path Units.
Feb 24 14:44:52 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 24 14:44:52 localhost systemd[1]: Reached target Slice Units.
Feb 24 14:44:52 localhost systemd[1]: Reached target Swaps.
Feb 24 14:44:52 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 24 14:44:52 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 24 14:44:52 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 24 14:44:52 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 24 14:44:52 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 24 14:44:52 localhost systemd[1]: Listening on udev Control Socket.
Feb 24 14:44:52 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 24 14:44:52 localhost systemd[1]: Mounting Huge Pages File System...
Feb 24 14:44:52 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 24 14:44:52 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 24 14:44:52 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 24 14:44:52 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 24 14:44:52 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 24 14:44:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 24 14:44:52 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 24 14:44:52 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 24 14:44:52 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 24 14:44:52 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 24 14:44:52 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 24 14:44:52 localhost systemd[1]: Stopped Journal Service.
Feb 24 14:44:52 localhost systemd[1]: Starting Journal Service...
Feb 24 14:44:52 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 24 14:44:52 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 24 14:44:52 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 24 14:44:52 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 24 14:44:52 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 24 14:44:52 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 24 14:44:52 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
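The kernel's 2038 note means this XFS root was created without the bigtime feature (timestamps capped at 0x7fffffff). Only the check is sketched here, since enabling bigtime is a separate offline operation; xfs_info ships with xfsprogs, assumed present on this host:

    xfs_info / | grep -o 'bigtime=[01]'   # bigtime=1 means timestamps extend past 2038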
Feb 24 14:44:52 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 24 14:44:52 localhost kernel: fuse: init (API version 7.37)
Feb 24 14:44:52 localhost systemd[1]: Mounted Huge Pages File System.
Feb 24 14:44:52 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 24 14:44:52 localhost systemd-journald[696]: Journal started
Feb 24 14:44:52 localhost systemd-journald[696]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 24 14:44:52 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 24 14:44:52 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 24 14:44:52 localhost systemd[1]: Started Journal Service.
Feb 24 14:44:52 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 24 14:44:52 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 24 14:44:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 24 14:44:52 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 24 14:44:52 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 24 14:44:52 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 24 14:44:52 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 24 14:44:52 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 24 14:44:52 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 24 14:44:52 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 24 14:44:52 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 24 14:44:52 localhost systemd[1]: Mounting FUSE Control File System...
Feb 24 14:44:52 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 24 14:44:52 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 24 14:44:52 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 24 14:44:52 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 24 14:44:52 localhost systemd[1]: Starting Load/Save OS Random Seed...
Feb 24 14:44:52 localhost systemd[1]: Starting Create System Users...
Feb 24 14:44:52 localhost systemd[1]: Mounted FUSE Control File System.
Feb 24 14:44:52 localhost systemd-journald[696]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 24 14:44:52 localhost systemd-journald[696]: Received client request to flush runtime journal.
Feb 24 14:44:52 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 24 14:44:52 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 24 14:44:52 localhost systemd[1]: Finished Load/Save OS Random Seed.
Feb 24 14:44:52 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 24 14:44:52 localhost systemd[1]: Finished Create System Users.
Feb 24 14:44:52 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 24 14:44:52 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 24 14:44:52 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 24 14:44:52 localhost systemd[1]: Reached target Local File Systems.
Feb 24 14:44:52 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 24 14:44:52 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 24 14:44:52 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 24 14:44:52 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 24 14:44:52 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 24 14:44:52 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 24 14:44:52 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 24 14:44:52 localhost bootctl[715]: Couldn't find EFI system partition, skipping.
Feb 24 14:44:52 localhost systemd[1]: Finished Automatic Boot Loader Update.
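bootctl finding no EFI system partition is consistent with a guest booted through legacy BIOS, so the boot-loader-update unit correctly has nothing to do. A quick firmware-type check, as a sketch:

    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS   # the efi sysfs dir only exists on UEFI boots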
Feb 24 14:44:52 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 24 14:44:52 localhost systemd[1]: Starting Security Auditing Service...
Feb 24 14:44:52 localhost systemd[1]: Starting RPC Bind...
Feb 24 14:44:52 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 24 14:44:52 localhost auditd[721]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 24 14:44:52 localhost auditd[721]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 24 14:44:52 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 24 14:44:52 localhost systemd[1]: Started RPC Bind.
Feb 24 14:44:52 localhost augenrules[726]: /sbin/augenrules: No change
Feb 24 14:44:52 localhost augenrules[741]: No rules
Feb 24 14:44:52 localhost augenrules[741]: enabled 1
Feb 24 14:44:52 localhost augenrules[741]: failure 1
Feb 24 14:44:52 localhost augenrules[741]: pid 721
Feb 24 14:44:52 localhost augenrules[741]: rate_limit 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_limit 8192
Feb 24 14:44:52 localhost augenrules[741]: lost 0
Feb 24 14:44:52 localhost augenrules[741]: backlog 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time 60000
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time_actual 0
Feb 24 14:44:52 localhost augenrules[741]: enabled 1
Feb 24 14:44:52 localhost augenrules[741]: failure 1
Feb 24 14:44:52 localhost augenrules[741]: pid 721
Feb 24 14:44:52 localhost augenrules[741]: rate_limit 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_limit 8192
Feb 24 14:44:52 localhost augenrules[741]: lost 0
Feb 24 14:44:52 localhost augenrules[741]: backlog 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time 60000
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time_actual 0
Feb 24 14:44:52 localhost augenrules[741]: enabled 1
Feb 24 14:44:52 localhost augenrules[741]: failure 1
Feb 24 14:44:52 localhost augenrules[741]: pid 721
Feb 24 14:44:52 localhost augenrules[741]: rate_limit 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_limit 8192
Feb 24 14:44:52 localhost augenrules[741]: lost 0
Feb 24 14:44:52 localhost augenrules[741]: backlog 0
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time 60000
Feb 24 14:44:52 localhost augenrules[741]: backlog_wait_time_actual 0
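The status block above is the kernel audit state: auditing enabled, failure mode 1, no rules loaded, an 8192-entry backlog with nothing lost. The same fields can be re-read at any time; a sketch using the audit userspace tools already running on this host:

    auditctl -s   # enabled/failure/pid/backlog counters, as logged above
    auditctl -l   # loaded rules; prints "No rules" to match augenrules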
Feb 24 14:44:52 localhost systemd[1]: Started Security Auditing Service.
Feb 24 14:44:52 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 24 14:44:52 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 24 14:44:52 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 24 14:44:52 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 24 14:44:52 localhost systemd-udevd[749]: Using default interface naming scheme 'rhel-9.0'.
Feb 24 14:44:52 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 24 14:44:53 localhost systemd[1]: Starting Update is Completed...
Feb 24 14:44:53 localhost systemd[1]: Finished Update is Completed.
Feb 24 14:44:53 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 24 14:44:53 localhost systemd[1]: Reached target System Initialization.
Feb 24 14:44:53 localhost systemd[1]: Started dnf makecache --timer.
Feb 24 14:44:53 localhost systemd[1]: Started Daily rotation of log files.
Feb 24 14:44:53 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 24 14:44:53 localhost systemd[1]: Reached target Timer Units.
Feb 24 14:44:53 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 24 14:44:53 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 24 14:44:53 localhost systemd[1]: Reached target Socket Units.
Feb 24 14:44:53 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 24 14:44:53 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 24 14:44:53 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 24 14:44:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 24 14:44:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 24 14:44:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 24 14:44:53 localhost systemd-udevd[760]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 14:44:53 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 24 14:44:53 localhost systemd[1]: Reached target Basic System.
Feb 24 14:44:53 localhost dbus-broker-lau[787]: Ready
Feb 24 14:44:53 localhost systemd[1]: Starting NTP client/server...
Feb 24 14:44:53 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 24 14:44:53 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 24 14:44:53 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 24 14:44:53 localhost systemd[1]: Starting IPv4 firewall with iptables...
Feb 24 14:44:53 localhost systemd[1]: Started irqbalance daemon.
Feb 24 14:44:53 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 24 14:44:53 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 14:44:53 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 14:44:53 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 14:44:53 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 24 14:44:53 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 24 14:44:53 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 24 14:44:53 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 24 14:44:53 localhost chronyd[810]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 24 14:44:53 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 24 14:44:53 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 24 14:44:53 localhost chronyd[810]: Loaded 0 symmetric keys
Feb 24 14:44:53 localhost chronyd[810]: Using right/UTC timezone to obtain leap second data
Feb 24 14:44:53 localhost chronyd[810]: Loaded seccomp filter (level 2)
Feb 24 14:44:53 localhost systemd[1]: Starting User Login Management...
Feb 24 14:44:53 localhost systemd[1]: Started NTP client/server.
Feb 24 14:44:53 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 24 14:44:53 localhost systemd-logind[813]: New seat seat0.
Feb 24 14:44:53 localhost systemd-logind[813]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 24 14:44:53 localhost systemd-logind[813]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 24 14:44:53 localhost systemd[1]: Started User Login Management.
Feb 24 14:44:53 localhost kernel: kvm_amd: TSC scaling supported
Feb 24 14:44:53 localhost kernel: kvm_amd: Nested Virtualization enabled
Feb 24 14:44:53 localhost kernel: kvm_amd: Nested Paging enabled
Feb 24 14:44:53 localhost kernel: kvm_amd: LBR virtualization supported
Feb 24 14:44:53 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 24 14:44:53 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 24 14:44:53 localhost iptables.init[800]: iptables: Applying firewall rules: [  OK  ]
Feb 24 14:44:53 localhost systemd[1]: Finished IPv4 firewall with iptables.
Feb 24 14:44:53 localhost cloud-init[853]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 24 Feb 2026 14:44:53 +0000. Up 6.39 seconds.
Feb 24 14:44:54 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 24 14:44:54 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 24 14:44:54 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp1ymx16es.mount: Deactivated successfully.
Feb 24 14:44:54 localhost systemd[1]: Starting Hostname Service...
Feb 24 14:44:54 localhost systemd[1]: Started Hostname Service.
Feb 24 14:44:54 np0005628225.novalocal systemd-hostnamed[867]: Hostname set to <np0005628225.novalocal> (static)
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Reached target Preparation for Network.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Starting Network Manager...
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4215] NetworkManager (version 1.54.3-2.el9) is starting... (boot:fe81dc1b-8858-484f-aedf-ceb94a58f5bc)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4220] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4405] manager[0x5571b2efb000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4457] hostname: hostname: using hostnamed
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4457] hostname: static hostname changed from (none) to "np0005628225.novalocal"
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4464] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4605] manager[0x5571b2efb000]: rfkill: Wi-Fi hardware radio set enabled
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4606] manager[0x5571b2efb000]: rfkill: WWAN hardware radio set enabled
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4713] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4714] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4715] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4716] manager: Networking is enabled by state file
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4718] settings: Loaded settings plugin: keyfile (internal)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4754] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4786] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
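NetworkManager names the migration path itself in the warning above. A cautious sequence, sketched here (the FILENAME output field assumes a reasonably recent nmcli; this log shows NM 1.54.3, which has both):

    nmcli -f NAME,FILENAME connection show   # spot profiles still stored as ifcfg files
    nmcli connection migrate                 # rewrite them in keyfile format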
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4804] dhcp: init: Using DHCP client 'internal'
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4810] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4832] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4846] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4862] device (lo): Activation: starting connection 'lo' (813eb82a-ec75-4929-9eba-5e76a5ddb15b)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4874] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4881] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4918] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4925] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4930] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4934] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4937] device (eth0): carrier: link connected
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Started Network Manager.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4942] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4952] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4961] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Reached target Network.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4968] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4969] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4974] manager: NetworkManager state is now CONNECTING
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4976] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4988] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.4994] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5159] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5163] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5173] device (lo): Activation: successful, device activated.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Reached target NFS client services.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Reached target Remote File Systems.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5902] dhcp4 (eth0): state changed new lease, address=38.102.83.46
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5915] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5950] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5975] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5977] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5982] manager: NetworkManager state is now CONNECTED_SITE
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5986] device (eth0): Activation: successful, device activated.
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5993] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 24 14:44:54 np0005628225.novalocal NetworkManager[871]: <info>  [1771944294.5998] manager: startup complete
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 24 14:44:54 np0005628225.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Feb 24 14:44:54 np0005628225.novalocal cloud-init[934]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 24 Feb 2026 14:44:54 +0000. Up 7.43 seconds.
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |  eth0  | True |         38.102.83.46         | 255.255.255.0 | global | fa:16:3e:50:c5:aa |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |  eth0  | True | fe80::f816:3eff:fe50:c5aa/64 |       .       |  link  | fa:16:3e:50:c5:aa |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 24 14:44:55 np0005628225.novalocal cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: new group: name=cloud-user, GID=1001
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: add 'cloud-user' to group 'adm'
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: add 'cloud-user' to group 'systemd-journal'
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: add 'cloud-user' to shadow group 'adm'
Feb 24 14:44:56 np0005628225.novalocal useradd[1001]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Generating public/private rsa key pair.
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key fingerprint is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: SHA256:HmmFfD/9RoII2oZZ/Znm5wMhcPYCQ792dYbixYLOkBM root@np0005628225.novalocal
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key's randomart image is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +---[RSA 3072]----+
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |       .E        |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |       .+*o. . . |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |        O**.o = o|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |       * Xo*oX o |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |      + S *o@.o .|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |       + o +.. + |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |        .   ... o|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |             o.. |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |              .. |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +----[SHA256]-----+
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Generating public/private ecdsa key pair.
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key fingerprint is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: SHA256:QmzJrKFvBZUqMgVer3PWNfrREmBBlcqSx9wNnBBsLR8 root@np0005628225.novalocal
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key's randomart image is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +---[ECDSA 256]---+
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |.. .  +OB.o      |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |. o .=o=.E       |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: | o  oo& =++      |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |o ..oO.*oo+.     |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: | o.+.o=oSo .     |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |   .+. .. o      |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |    o    .       |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |   .             |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |                 |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +----[SHA256]-----+
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Generating public/private ed25519 key pair.
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key fingerprint is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: SHA256:pNWtXek8MERg8Ss9NvrxwQz6A7s5TaJX1RPPV8s6xyE root@np0005628225.novalocal
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: The key's randomart image is:
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +--[ED25519 256]--+
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |          ++o    |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |         o +   o.|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |        o . = +o=|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |       +   + E.==|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |      . S o O.B +|
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |          o=oO + |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |         .oB. *  |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |        . ++oo . |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: |         .ooo..  |
Feb 24 14:44:56 np0005628225.novalocal cloud-init[934]: +----[SHA256]-----+
Feb 24 14:44:56 np0005628225.novalocal sm-notify[1017]: Version 2.5.4 starting
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Reached target Network is Online.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting System Logging Service...
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting Permit User Sessions...
Feb 24 14:44:56 np0005628225.novalocal sshd[1019]: Server listening on 0.0.0.0 port 22.
Feb 24 14:44:56 np0005628225.novalocal sshd[1019]: Server listening on :: port 22.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Finished Permit User Sessions.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started Command Scheduler.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started Getty on tty1.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 24 14:44:56 np0005628225.novalocal crond[1022]: (CRON) STARTUP (1.5.7)
Feb 24 14:44:56 np0005628225.novalocal crond[1022]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Reached target Login Prompts.
Feb 24 14:44:56 np0005628225.novalocal crond[1022]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 32% if used.)
Feb 24 14:44:56 np0005628225.novalocal crond[1022]: (CRON) INFO (running with inotify support)
Feb 24 14:44:56 np0005628225.novalocal rsyslogd[1018]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1018" x-info="https://www.rsyslog.com"] start
Feb 24 14:44:56 np0005628225.novalocal rsyslogd[1018]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Started System Logging Service.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Reached target Multi-User System.
Feb 24 14:44:56 np0005628225.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 24 14:44:57 np0005628225.novalocal rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1147]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 24 Feb 2026 14:44:57 +0000. Up 9.72 seconds.
Feb 24 14:44:57 np0005628225.novalocal kdumpctl[1030]: kdump: No kdump initial ramdisk found.
Feb 24 14:44:57 np0005628225.novalocal kdumpctl[1030]: kdump: Rebuilding /boot/initramfs-5.14.0-681.el9.x86_64kdump.img
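kdumpctl is rebuilding because no kdump initramfs exists yet for this kernel; the long dracut run that follows is that rebuild. Once it finishes, the result can be verified, as a sketch:

    kdumpctl status                                # whether the crash kernel is loaded
    ls -lh /boot/initramfs-$(uname -r)kdump.img    # the image dracut is producing below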
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1358]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 24 Feb 2026 14:44:57 +0000. Up 10.10 seconds.
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1357]: Connection reset by 38.102.83.114 port 51086 [preauth]
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1385]: Unable to negotiate with 38.102.83.114 port 51092: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1395]: #############################################################
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1401]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1394]: Connection reset by 38.102.83.114 port 51094 [preauth]
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1414]: 256 SHA256:QmzJrKFvBZUqMgVer3PWNfrREmBBlcqSx9wNnBBsLR8 root@np0005628225.novalocal (ECDSA)
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1422]: 256 SHA256:pNWtXek8MERg8Ss9NvrxwQz6A7s5TaJX1RPPV8s6xyE root@np0005628225.novalocal (ED25519)
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1418]: Unable to negotiate with 38.102.83.114 port 51104: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1433]: 3072 SHA256:HmmFfD/9RoII2oZZ/Znm5wMhcPYCQ792dYbixYLOkBM root@np0005628225.novalocal (RSA)
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1437]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1440]: #############################################################
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1431]: Unable to negotiate with 38.102.83.114 port 51114: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1451]: Connection reset by 38.102.83.114 port 51120 [preauth]
Feb 24 14:44:57 np0005628225.novalocal cloud-init[1358]: Cloud-init v. 24.4-8.el9 finished at Tue, 24 Feb 2026 14:44:57 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.27 seconds
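Cloud-init finishes in 10.27 s against a ConfigDrive datasource. Its bundled tooling can break that figure down per stage, which helps when boot times regress; a sketch:

    cloud-init status --long   # overall result and detected datasource
    cloud-init analyze show    # per-stage and per-module timings from the log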
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1463]: Unable to negotiate with 38.102.83.114 port 51136: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1478]: Unable to negotiate with 38.102.83.114 port 51144: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Feb 24 14:44:57 np0005628225.novalocal systemd[1]: Reached target Cloud-init target.
Feb 24 14:44:57 np0005628225.novalocal sshd-session[1455]: Connection closed by 38.102.83.114 port 51134 [preauth]
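All of the sshd-session rejections above come from one peer (38.102.83.114) offering a single host-key algorithm per connection, which reads more like an algorithm scan than a misconfigured client. To see which host keys and algorithms this sshd actually serves, a sketch (sshd -T dumps the effective configuration and needs root):

    sshd -T | grep -Ei '^hostkey'   # HostKey paths and HostKeyAlgorithms in effect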
Feb 24 14:44:57 np0005628225.novalocal dracut[1541]: dracut-057-110.git20260130.el9
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-681.el9.x86_64kdump.img 5.14.0-681.el9.x86_64
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 24 14:44:58 np0005628225.novalocal dracut[1543]: Module 'resume' will not be installed, because it's in the list to be omitted!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: memstrack is not available
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
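dracut states the dependency directly: the memstrack module is skipped only because the binary is absent. If rd.memdebug>=4 tracing is ever needed on this host, a sketch (package names are the ones dracut itself lists):

    dnf install -y memstrack procps-ng   # then rebuild the initramfs so the module is included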
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 24 14:44:59 np0005628225.novalocal chronyd[810]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Feb 24 14:44:59 np0005628225.novalocal chronyd[810]: System clock TAI offset set to 37 seconds
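chronyd has selected an upstream source and set the TAI offset, so timekeeping settles early in the boot. Sync quality can be inspected at any point; a sketch:

    chronyc tracking     # current offset, frequency, and the selected reference
    chronyc sources -v   # all configured sources with reachability state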
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: memstrack is not available
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: *** Including module: systemd ***
Feb 24 14:44:59 np0005628225.novalocal dracut[1543]: *** Including module: fips ***
Feb 24 14:45:00 np0005628225.novalocal dracut[1543]: *** Including module: systemd-initrd ***
Feb 24 14:45:00 np0005628225.novalocal dracut[1543]: *** Including module: i18n ***
Feb 24 14:45:00 np0005628225.novalocal dracut[1543]: *** Including module: drm ***
Feb 24 14:45:00 np0005628225.novalocal dracut[1543]: *** Including module: prefixdevname ***
Feb 24 14:45:00 np0005628225.novalocal dracut[1543]: *** Including module: kernel-modules ***
Feb 24 14:45:00 np0005628225.novalocal kernel: block vda: the capability attribute has been deprecated.
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: kernel-modules-extra ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: qemu ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: fstab-sys ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: rootfs-block ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: terminfo ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: udev-rules ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: Skipping udev rule: 91-permissions.rules
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: virtiofs ***
Feb 24 14:45:01 np0005628225.novalocal dracut[1543]: *** Including module: dracut-systemd ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]: *** Including module: usrmount ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]: *** Including module: base ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]: *** Including module: fs-lib ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]: *** Including module: kdumpbase ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:   microcode_ctl module: mangling fw_dir
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel" is ignored
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 24 14:45:02 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Including module: openssl ***
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Including module: shutdown ***
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Including module: squash ***
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Including modules done ***
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Installing kernel module dependencies ***
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 35 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 35 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 33 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 33 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 31 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 28 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 34 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 34 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 32 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 30 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 24 14:45:03 np0005628225.novalocal irqbalance[801]: IRQ 29 affinity is now unmanaged
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Installing kernel module dependencies done ***
Feb 24 14:45:03 np0005628225.novalocal dracut[1543]: *** Resolving executable dependencies ***
Feb 24 14:45:04 np0005628225.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: *** Resolving executable dependencies done ***
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: *** Generating early-microcode cpio image ***
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: *** Store current command line parameters ***
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: Stored kernel commandline:
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: No dracut internal kernel commandline stored in the initramfs
Feb 24 14:45:05 np0005628225.novalocal dracut[1543]: *** Install squash loader ***
Feb 24 14:45:06 np0005628225.novalocal dracut[1543]: *** Squashing the files inside the initramfs ***
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: *** Squashing the files inside the initramfs done ***
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: *** Creating image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' ***
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: *** Hardlinking files ***
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Mode:           real
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Files:          50
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Linked:         0 files
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Compared:       0 xattrs
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Compared:       0 files
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Saved:          0 B
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: Duration:       0.000467 seconds
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: *** Hardlinking files done ***
Feb 24 14:45:07 np0005628225.novalocal dracut[1543]: *** Creating initramfs image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' done ***
Feb 24 14:45:08 np0005628225.novalocal kdumpctl[1030]: kdump: kexec: loaded kdump kernel
Feb 24 14:45:08 np0005628225.novalocal kdumpctl[1030]: kdump: Starting kdump: [OK]
Feb 24 14:45:08 np0005628225.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 24 14:45:08 np0005628225.novalocal systemd[1]: Startup finished in 1.496s (kernel) + 2.442s (initrd) + 16.789s (userspace) = 20.728s.
Feb 24 14:45:13 np0005628225.novalocal sshd-session[4799]: Accepted publickey for zuul from 38.102.83.114 port 59390 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 24 14:45:13 np0005628225.novalocal systemd-logind[813]: New session 1 of user zuul.
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Queued start job for default target Main User Target.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Created slice User Application Slice.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Started Daily Cleanup of User's Temporary Directories.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Reached target Paths.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Reached target Timers.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Starting D-Bus User Message Bus Socket...
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Starting Create User's Volatile Files and Directories...
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Finished Create User's Volatile Files and Directories.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Listening on D-Bus User Message Bus Socket.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Reached target Sockets.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Reached target Basic System.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Reached target Main User Target.
Feb 24 14:45:13 np0005628225.novalocal systemd[4803]: Startup finished in 128ms.
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 24 14:45:13 np0005628225.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 24 14:45:13 np0005628225.novalocal sshd-session[4799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 14:45:13 np0005628225.novalocal python3[4886]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 14:45:16 np0005628225.novalocal python3[4914]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 14:45:22 np0005628225.novalocal python3[4972]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 14:45:23 np0005628225.novalocal python3[5012]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 24 14:45:24 np0005628225.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 24 14:45:25 np0005628225.novalocal python3[5040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCu7+tJBgpXLz9hvs8dGhopeVbJcXc8GB9J5VfOPlPCzoXOdHXvy8uhjgWE7Phb1N/2OT+0l54icI1P6jCx13v8vann0CcKIX3ShHOTJN+vzG2AeKs3TrHsoj44DdHhdPahvYR+BcekIo6j5QQsrkiaM6itZ729uqmwUWIAh8WcwdnwQF60VH65hofNC8FbsUGJQXqM51qFojNQgo1V0Hvb3b+slI7Ymq1WkRM/xZtgz3y3HaPMTIajH8Tw+1mEpyXbPkbY5ziCoX2g2TDUpdcAMVpwCZCg9yW9FDS6Ebj62OYyTqFLk6urvAAcZ723BBzHy9o8qi7noFB+DBrWmtEqDsP0oM6B5jIpjooHQQT81l4+aO0zUI9X5C8ydUUgT4A1ms41hCtgw4tSO2CROzU/J9qT8uNrkssK5Q8UyVv4w64b5DKNRR1jqAZob7UOynLZzIyK9vzbJqWJXImvUf46LFKpxZC1XWspmu9MtuKXlLHItF5fhrzzZ5Mzji7SqdE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:25 np0005628225.novalocal python3[5064]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:26 np0005628225.novalocal python3[5163]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:26 np0005628225.novalocal python3[5234]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771944326.0488524-207-16912821513177/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c1e1ba2215834a8da99b6f431234d769_id_rsa follow=False checksum=ab4baba21ab0af8d2fb0749257c6023c248acc7f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:27 np0005628225.novalocal python3[5357]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:27 np0005628225.novalocal python3[5428]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771944327.0552723-240-15845515446752/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c1e1ba2215834a8da99b6f431234d769_id_rsa.pub follow=False checksum=398d7b14fa53d6e46bd6ed8679d09f192762f500 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:28 np0005628225.novalocal sshd-session[5454]: error: kex_exchange_identification: read: Connection reset by peer
Feb 24 14:45:28 np0005628225.novalocal sshd-session[5454]: Connection reset by 176.120.22.52 port 31823
Feb 24 14:45:29 np0005628225.novalocal python3[5478]: ansible-ping Invoked with data=pong
Feb 24 14:45:30 np0005628225.novalocal python3[5502]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 14:45:32 np0005628225.novalocal python3[5560]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 24 14:45:33 np0005628225.novalocal python3[5592]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:33 np0005628225.novalocal python3[5616]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:33 np0005628225.novalocal python3[5640]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:33 np0005628225.novalocal python3[5664]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:34 np0005628225.novalocal python3[5688]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:34 np0005628225.novalocal python3[5712]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:35 np0005628225.novalocal sudo[5736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjyvfhdlvzphgkgnwtcotpvaewkmkfux ; /usr/bin/python3'
Feb 24 14:45:35 np0005628225.novalocal sudo[5736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:36 np0005628225.novalocal python3[5738]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:36 np0005628225.novalocal sudo[5736]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:36 np0005628225.novalocal sudo[5814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkbduvklyrleiqptomqemvtdpxenjoh ; /usr/bin/python3'
Feb 24 14:45:36 np0005628225.novalocal sudo[5814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:36 np0005628225.novalocal python3[5816]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:36 np0005628225.novalocal sudo[5814]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:36 np0005628225.novalocal sudo[5887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhyrqnxbzbdryuwlwlufqxfbhkmkeejr ; /usr/bin/python3'
Feb 24 14:45:36 np0005628225.novalocal sudo[5887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:37 np0005628225.novalocal python3[5889]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771944336.222604-21-66893074572019/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:37 np0005628225.novalocal sudo[5887]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:37 np0005628225.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:38 np0005628225.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:38 np0005628225.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:38 np0005628225.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:38 np0005628225.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:39 np0005628225.novalocal python3[6057]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:39 np0005628225.novalocal python3[6081]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:39 np0005628225.novalocal python3[6105]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:39 np0005628225.novalocal python3[6129]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:40 np0005628225.novalocal python3[6153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:40 np0005628225.novalocal python3[6177]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:40 np0005628225.novalocal python3[6201]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:40 np0005628225.novalocal python3[6225]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:41 np0005628225.novalocal python3[6249]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:41 np0005628225.novalocal python3[6273]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:41 np0005628225.novalocal python3[6297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:41 np0005628225.novalocal python3[6321]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:42 np0005628225.novalocal python3[6345]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:42 np0005628225.novalocal python3[6369]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:42 np0005628225.novalocal python3[6393]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:42 np0005628225.novalocal python3[6417]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:43 np0005628225.novalocal python3[6441]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:43 np0005628225.novalocal python3[6465]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:43 np0005628225.novalocal python3[6489]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:43 np0005628225.novalocal python3[6513]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:44 np0005628225.novalocal python3[6537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:45:47 np0005628225.novalocal sudo[6561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximvnbvvqdfyhrrcuhgpfdgymgthxqee ; /usr/bin/python3'
Feb 24 14:45:47 np0005628225.novalocal sudo[6561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:47 np0005628225.novalocal python3[6563]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 24 14:45:47 np0005628225.novalocal systemd[1]: Starting Time & Date Service...
Feb 24 14:45:47 np0005628225.novalocal systemd[1]: Started Time & Date Service.
Feb 24 14:45:48 np0005628225.novalocal systemd-timedated[6565]: Changed time zone to 'UTC' (UTC).
Feb 24 14:45:48 np0005628225.novalocal sudo[6561]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:48 np0005628225.novalocal sudo[6592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cercncaqgptcsgqsipcfqxpkybznbjtf ; /usr/bin/python3'
Feb 24 14:45:48 np0005628225.novalocal sudo[6592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:48 np0005628225.novalocal python3[6594]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:48 np0005628225.novalocal sudo[6592]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:48 np0005628225.novalocal python3[6670]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:49 np0005628225.novalocal python3[6741]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771944348.7416809-153-168689887465813/source _original_basename=tmpx32ygshf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:49 np0005628225.novalocal python3[6841]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:50 np0005628225.novalocal python3[6912]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771944349.711927-183-229919580253173/source _original_basename=tmp4_k1e6kx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:50 np0005628225.novalocal sudo[7012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbrpmqxpsgoqkqtetikpkedpvivamaag ; /usr/bin/python3'
Feb 24 14:45:50 np0005628225.novalocal sudo[7012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:51 np0005628225.novalocal python3[7014]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:51 np0005628225.novalocal sudo[7012]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:51 np0005628225.novalocal sudo[7085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilfrofrjruahvetrjnlywdwvhciuhjr ; /usr/bin/python3'
Feb 24 14:45:51 np0005628225.novalocal sudo[7085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:51 np0005628225.novalocal python3[7087]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771944350.8682756-231-201518738724916/source _original_basename=tmp212hfxy_ follow=False checksum=95de52312af74438c55f36bac2bb506be11f0f04 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:51 np0005628225.novalocal sudo[7085]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:52 np0005628225.novalocal python3[7135]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:45:52 np0005628225.novalocal python3[7161]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:45:52 np0005628225.novalocal sudo[7239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npveqfeqpnwoaqczdrehoxjxtbkxxpzb ; /usr/bin/python3'
Feb 24 14:45:52 np0005628225.novalocal sudo[7239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:52 np0005628225.novalocal python3[7241]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:45:52 np0005628225.novalocal sudo[7239]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:53 np0005628225.novalocal sudo[7312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzhpczsftigfhacydosrhfssugxbneb ; /usr/bin/python3'
Feb 24 14:45:53 np0005628225.novalocal sudo[7312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:53 np0005628225.novalocal python3[7314]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771944352.5562859-273-75642641143037/source _original_basename=tmp17x2jd_q follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:45:53 np0005628225.novalocal sudo[7312]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:53 np0005628225.novalocal sudo[7362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfxivuofkqwvqpjnquvxlhjzhswwubtg ; /usr/bin/python3'
Feb 24 14:45:54 np0005628225.novalocal sudo[7362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:45:54 np0005628225.novalocal python3[7365]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-c51e-8692-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:45:54 np0005628225.novalocal sudo[7362]: pam_unix(sudo:session): session closed for user root
Feb 24 14:45:54 np0005628225.novalocal python3[7392]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-c51e-8692-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 24 14:45:56 np0005628225.novalocal python3[7421]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:46:12 np0005628225.novalocal sudo[7445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcipydrajjknmeawaxfoxajdwqqnmtjt ; /usr/bin/python3'
Feb 24 14:46:12 np0005628225.novalocal sudo[7445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:46:13 np0005628225.novalocal python3[7447]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:46:13 np0005628225.novalocal sudo[7445]: pam_unix(sudo:session): session closed for user root
Feb 24 14:46:18 np0005628225.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 24 14:46:46 np0005628225.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 24 14:46:46 np0005628225.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.3901] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 24 14:46:46 np0005628225.novalocal systemd-udevd[7450]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4089] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4125] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4130] device (eth1): carrier: link connected
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4133] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4142] policy: auto-activating connection 'Wired connection 1' (36860a5a-6ecf-323d-93d4-bc747fb83ad9)
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4148] device (eth1): Activation: starting connection 'Wired connection 1' (36860a5a-6ecf-323d-93d4-bc747fb83ad9)
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4149] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4154] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4160] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 14:46:46 np0005628225.novalocal NetworkManager[871]: <info>  [1771944406.4166] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:46:47 np0005628225.novalocal python3[7477]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-848b-eb8b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:46:54 np0005628225.novalocal sudo[7555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pisqwppnskcfsuagtgirtojrgecfaahz ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 24 14:46:54 np0005628225.novalocal sudo[7555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:46:54 np0005628225.novalocal python3[7557]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:46:54 np0005628225.novalocal sudo[7555]: pam_unix(sudo:session): session closed for user root
Feb 24 14:46:54 np0005628225.novalocal sudo[7628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggesxtguejbzxebsqbjuamdhxfuhadac ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 24 14:46:54 np0005628225.novalocal sudo[7628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:46:54 np0005628225.novalocal python3[7630]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771944414.0964677-102-199389747219422/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1731353bb83ff6ef1fa034ce19cdee42e8f09649 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:46:54 np0005628225.novalocal sudo[7628]: pam_unix(sudo:session): session closed for user root
Feb 24 14:46:55 np0005628225.novalocal sudo[7678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alkvontijcxcmqiyvegfmwjcdzneelch ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 24 14:46:55 np0005628225.novalocal sudo[7678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:46:55 np0005628225.novalocal python3[7680]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Stopping Network Manager...
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6105] caught SIGTERM, shutting down normally.
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6116] dhcp4 (eth0): canceled DHCP transaction
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6117] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6117] dhcp4 (eth0): state changed no lease
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6122] manager: NetworkManager state is now CONNECTING
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6251] dhcp4 (eth1): canceled DHCP transaction
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6252] dhcp4 (eth1): state changed no lease
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[871]: <info>  [1771944415.6301] exiting (success)
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Stopped Network Manager.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Starting Network Manager...
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.6926] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:fe81dc1b-8858-484f-aedf-ceb94a58f5bc)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.6928] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7002] manager[0x564781c55000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Starting Hostname Service...
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Started Hostname Service.
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7955] hostname: hostname: using hostnamed
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7957] hostname: static hostname changed from (none) to "np0005628225.novalocal"
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7965] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7973] manager[0x564781c55000]: rfkill: Wi-Fi hardware radio set enabled
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.7973] manager[0x564781c55000]: rfkill: WWAN hardware radio set enabled
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8031] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8032] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8033] manager: Networking is enabled by state file
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8037] settings: Loaded settings plugin: keyfile (internal)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8045] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8092] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8110] dhcp: init: Using DHCP client 'internal'
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8114] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8120] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8128] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8141] device (lo): Activation: starting connection 'lo' (813eb82a-ec75-4929-9eba-5e76a5ddb15b)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8149] device (eth0): carrier: link connected
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8155] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8161] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8162] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8169] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8177] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8185] device (eth1): carrier: link connected
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8192] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8201] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (36860a5a-6ecf-323d-93d4-bc747fb83ad9) (indicated)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8203] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8210] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8222] device (eth1): Activation: starting connection 'Wired connection 1' (36860a5a-6ecf-323d-93d4-bc747fb83ad9)
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Started Network Manager.
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8232] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8240] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8244] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8246] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8249] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8253] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8256] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8259] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8263] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8286] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8293] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8305] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8311] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8339] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8348] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8360] device (lo): Activation: successful, device activated.
Feb 24 14:46:55 np0005628225.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8384] dhcp4 (eth0): state changed new lease, address=38.102.83.46
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8399] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8490] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8533] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8538] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8547] manager: NetworkManager state is now CONNECTED_SITE
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8554] device (eth0): Activation: successful, device activated.
Feb 24 14:46:55 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944415.8564] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 24 14:46:55 np0005628225.novalocal sudo[7678]: pam_unix(sudo:session): session closed for user root
Feb 24 14:46:56 np0005628225.novalocal python3[7765]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-848b-eb8b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:47:05 np0005628225.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 14:47:25 np0005628225.novalocal systemd[4803]: Starting Mark boot as successful...
Feb 24 14:47:25 np0005628225.novalocal systemd[4803]: Finished Mark boot as successful.
Feb 24 14:47:25 np0005628225.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4439] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 24 14:47:41 np0005628225.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 14:47:41 np0005628225.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4892] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4896] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4906] device (eth1): Activation: successful, device activated.
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4922] manager: startup complete
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4925] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <warn>  [1771944461.4931] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.4942] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5031] dhcp4 (eth1): canceled DHCP transaction
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5031] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5032] dhcp4 (eth1): state changed no lease
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5050] policy: auto-activating connection 'ci-private-network' (af387bd7-935d-54e8-ab92-78b300c447ac)
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5057] device (eth1): Activation: starting connection 'ci-private-network' (af387bd7-935d-54e8-ab92-78b300c447ac)
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5058] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5061] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5071] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5082] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5126] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5129] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 14:47:41 np0005628225.novalocal NetworkManager[7690]: <info>  [1771944461.5139] device (eth1): Activation: successful, device activated.
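The eth1 sequence above is NetworkManager's DHCP-timeout fallback: the assumed 'Wired connection 1' profile fails with reason 'ip-config-unavailable' once its 45-second DHCP transaction expires, and policy then auto-activates the static 'ci-private-network' profile. A minimal sketch of how such a profile pair could be created with nmcli — the connection names come from the log, but the address on the fallback profile is a hypothetical placeholder, since the real addressing is never logged here:

    # DHCP profile, preferred (higher autoconnect priority), bounded DHCP wait
    nmcli connection add type ethernet ifname eth1 con-name "Wired connection 1" \
        ipv4.method auto ipv4.dhcp-timeout 45 connection.autoconnect-priority 0
    # Static fallback; NetworkManager auto-activates it after the DHCP profile fails
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.122.10/24 \
        connection.autoconnect-priority -10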
Feb 24 14:47:51 np0005628225.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 14:47:55 np0005628225.novalocal sudo[7869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skyhpslgagsslqufherqksuqxbgurmro ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 24 14:47:55 np0005628225.novalocal sudo[7869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:47:55 np0005628225.novalocal python3[7871]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:47:55 np0005628225.novalocal sudo[7869]: pam_unix(sudo:session): session closed for user root
Feb 24 14:47:55 np0005628225.novalocal sudo[7942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afuaaqtkiduvdelilszgtfhqppquswoi ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 24 14:47:55 np0005628225.novalocal sudo[7942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:47:55 np0005628225.novalocal python3[7944]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771944475.3404112-259-267921992932807/source _original_basename=tmp958wyrc6 follow=False checksum=d21ebded187101b74095c9096f5ed121cdb95617 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:47:55 np0005628225.novalocal sudo[7942]: pam_unix(sudo:session): session closed for user root
Feb 24 14:48:56 np0005628225.novalocal sshd-session[4813]: Received disconnect from 38.102.83.114 port 59390:11: disconnected by user
Feb 24 14:48:56 np0005628225.novalocal sshd-session[4813]: Disconnected from user zuul 38.102.83.114 port 59390
Feb 24 14:48:56 np0005628225.novalocal sshd-session[4799]: pam_unix(sshd:session): session closed for user zuul
Feb 24 14:48:56 np0005628225.novalocal systemd-logind[813]: Session 1 logged out. Waiting for processes to exit.
Feb 24 14:49:00 np0005628225.novalocal sshd-session[7969]: Connection closed by 54.163.33.187 port 40592 [preauth]
Feb 24 14:50:25 np0005628225.novalocal systemd[4803]: Created slice User Background Tasks Slice.
Feb 24 14:50:25 np0005628225.novalocal systemd[4803]: Starting Cleanup of User's Temporary Files and Directories...
Feb 24 14:50:25 np0005628225.novalocal systemd[4803]: Finished Cleanup of User's Temporary Files and Directories.
Feb 24 14:54:08 np0005628225.novalocal sshd-session[7974]: Connection closed by authenticating user root 185.156.73.233 port 57020 [preauth]
Feb 24 14:56:41 np0005628225.novalocal sshd-session[7978]: Accepted publickey for zuul from 38.102.83.114 port 47872 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 14:56:41 np0005628225.novalocal systemd-logind[813]: New session 3 of user zuul.
Feb 24 14:56:41 np0005628225.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 24 14:56:41 np0005628225.novalocal sshd-session[7978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 14:56:41 np0005628225.novalocal sudo[8005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqycjifpjozgjjmdgeagiyczpjhfvzmj ; /usr/bin/python3'
Feb 24 14:56:41 np0005628225.novalocal sudo[8005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:41 np0005628225.novalocal python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-517a-e564-000000002253-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:56:41 np0005628225.novalocal sudo[8005]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:41 np0005628225.novalocal sudo[8034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnyoesftrdpmzqqodllvqagwhfgseqkj ; /usr/bin/python3'
Feb 24 14:56:41 np0005628225.novalocal sudo[8034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:41 np0005628225.novalocal python3[8036]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:41 np0005628225.novalocal sudo[8034]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:42 np0005628225.novalocal sudo[8060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gliprfswwonkiplhtrabyqebwcmzoxdd ; /usr/bin/python3'
Feb 24 14:56:42 np0005628225.novalocal sudo[8060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:42 np0005628225.novalocal python3[8062]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:42 np0005628225.novalocal sudo[8060]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:42 np0005628225.novalocal sudo[8086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebpcbfaslajcfbqkurmadujbqqakgyxo ; /usr/bin/python3'
Feb 24 14:56:42 np0005628225.novalocal sudo[8086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:42 np0005628225.novalocal python3[8088]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:42 np0005628225.novalocal sudo[8086]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:42 np0005628225.novalocal sudo[8112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrccrcmwthyhnjythkhhngejmultztb ; /usr/bin/python3'
Feb 24 14:56:42 np0005628225.novalocal sudo[8112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:42 np0005628225.novalocal python3[8114]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:42 np0005628225.novalocal sudo[8112]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:43 np0005628225.novalocal sudo[8138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtjmjjwytqfmuctmxkrvjfcsqlndzrn ; /usr/bin/python3'
Feb 24 14:56:43 np0005628225.novalocal sudo[8138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:43 np0005628225.novalocal python3[8140]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:43 np0005628225.novalocal sudo[8138]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:43 np0005628225.novalocal sudo[8216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmzqbhrwbnfgatyblajanxwajuudzcq ; /usr/bin/python3'
Feb 24 14:56:43 np0005628225.novalocal sudo[8216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:43 np0005628225.novalocal python3[8218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:56:43 np0005628225.novalocal sudo[8216]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:44 np0005628225.novalocal sudo[8289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dppjeyqgtzrlccslyqkwggibljibauur ; /usr/bin/python3'
Feb 24 14:56:44 np0005628225.novalocal sudo[8289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:44 np0005628225.novalocal python3[8291]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771945003.5489326-533-108962969060285/source _original_basename=tmpht6mkkom follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:56:44 np0005628225.novalocal sudo[8289]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:44 np0005628225.novalocal sudo[8339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfocwolalopqeukeycdwgscvnoiaiyda ; /usr/bin/python3'
Feb 24 14:56:44 np0005628225.novalocal sudo[8339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:45 np0005628225.novalocal python3[8341]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 14:56:45 np0005628225.novalocal systemd[1]: Reloading.
Feb 24 14:56:45 np0005628225.novalocal systemd-rc-local-generator[8364]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 14:56:45 np0005628225.novalocal sudo[8339]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:46 np0005628225.novalocal sudo[8401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjjfajlhqkpgnubfnvchwlcglqpjwjnb ; /usr/bin/python3'
Feb 24 14:56:46 np0005628225.novalocal sudo[8401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:46 np0005628225.novalocal python3[8403]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 24 14:56:46 np0005628225.novalocal sudo[8401]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:47 np0005628225.novalocal sudo[8427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otkskpfgkwrwnowchfllfugcxorfudqj ; /usr/bin/python3'
Feb 24 14:56:47 np0005628225.novalocal sudo[8427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:47 np0005628225.novalocal python3[8429]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:56:47 np0005628225.novalocal sudo[8427]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:47 np0005628225.novalocal sudo[8455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbvvwqqpdupdssmckfxrwdhjpbmpkjfv ; /usr/bin/python3'
Feb 24 14:56:47 np0005628225.novalocal sudo[8455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:47 np0005628225.novalocal python3[8457]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:56:47 np0005628225.novalocal sudo[8455]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:47 np0005628225.novalocal sudo[8483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmhxorkxeexhpeqzheyfpwxbtazanmfb ; /usr/bin/python3'
Feb 24 14:56:47 np0005628225.novalocal sudo[8483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:47 np0005628225.novalocal python3[8485]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:56:47 np0005628225.novalocal sudo[8483]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:47 np0005628225.novalocal sudo[8511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duxvmtydqbuvsmzrjtldkkjyhhywwiun ; /usr/bin/python3'
Feb 24 14:56:47 np0005628225.novalocal sudo[8511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:48 np0005628225.novalocal python3[8513]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:56:48 np0005628225.novalocal sudo[8511]: pam_unix(sudo:session): session closed for user root
Feb 24 14:56:48 np0005628225.novalocal python3[8540]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-517a-e564-00000000225a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
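The loop above throttles block I/O per top-level cgroup v2 slice: the MAJ:MIN pair obtained earlier with lsblk (252:0 for /dev/vda) is written into each slice's io.max together with IOPS and bytes-per-second caps, then read back for verification. A condensed sketch of the same steps, assuming the io controller is already enabled for these slices (the earlier wait_for on system.slice/io.max suggests it is):

    # Resolve the device number for the root disk (252:0 in this run)
    devnum=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')
    # Cap reads/writes at 18k IOPS and 262144000 B/s (250 MiB/s) per slice
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$devnum riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$cg/io.max"
    done
    cat /sys/fs/cgroup/system.slice/io.max   # verify the limits took effect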
Feb 24 14:56:49 np0005628225.novalocal python3[8570]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 24 14:56:51 np0005628225.novalocal sshd-session[7981]: Connection closed by 38.102.83.114 port 47872
Feb 24 14:56:51 np0005628225.novalocal sshd-session[7978]: pam_unix(sshd:session): session closed for user zuul
Feb 24 14:56:51 np0005628225.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 24 14:56:51 np0005628225.novalocal systemd[1]: session-3.scope: Consumed 4.151s CPU time.
Feb 24 14:56:51 np0005628225.novalocal systemd-logind[813]: Session 3 logged out. Waiting for processes to exit.
Feb 24 14:56:51 np0005628225.novalocal systemd-logind[813]: Removed session 3.
Feb 24 14:56:52 np0005628225.novalocal sshd-session[8574]: Accepted publickey for zuul from 38.102.83.114 port 44848 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 14:56:52 np0005628225.novalocal systemd-logind[813]: New session 4 of user zuul.
Feb 24 14:56:52 np0005628225.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 24 14:56:52 np0005628225.novalocal sshd-session[8574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 14:56:52 np0005628225.novalocal sudo[8601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajrbhgyrboclppzbidpmoephbzagnzp ; /usr/bin/python3'
Feb 24 14:56:52 np0005628225.novalocal sudo[8601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:56:53 np0005628225.novalocal python3[8603]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
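The dnf task above is equivalent to the command sketched below. The setsebool and SELinux policy-reload messages that follow are consistent with the container-selinux package scriptlets running during this install — an inference from the log ordering, not something the log states directly:

    # Same effect as the ansible-ansible.legacy.dnf invocation above
    dnf install -y podman buildah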
Feb 24 14:57:13 np0005628225.novalocal setsebool[8658]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 24 14:57:13 np0005628225.novalocal setsebool[8658]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  Converting 385 SID table entries...
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 14:57:24 np0005628225.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  Converting 388 SID table entries...
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 14:57:33 np0005628225.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 14:57:51 np0005628225.novalocal dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 24 14:57:51 np0005628225.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 14:57:51 np0005628225.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 24 14:57:51 np0005628225.novalocal systemd[1]: Reloading.
Feb 24 14:57:51 np0005628225.novalocal systemd-rc-local-generator[9443]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 14:57:51 np0005628225.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 14:57:52 np0005628225.novalocal sudo[8601]: pam_unix(sudo:session): session closed for user root
Feb 24 14:57:53 np0005628225.novalocal python3[10631]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-4c67-21da-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 14:57:54 np0005628225.novalocal kernel: evm: overlay not supported
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: Starting D-Bus User Message Bus...
Feb 24 14:57:54 np0005628225.novalocal dbus-broker-launch[11669]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 24 14:57:54 np0005628225.novalocal dbus-broker-launch[11669]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: Started D-Bus User Message Bus.
Feb 24 14:57:54 np0005628225.novalocal dbus-broker-lau[11669]: Ready
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: Created slice Slice /user.
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: podman-11524.scope: unit configures an IP firewall, but not running as root.
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: (This warning is only shown for the first unit using IP firewalling.)
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: Started podman-11524.scope.
Feb 24 14:57:54 np0005628225.novalocal systemd[4803]: Started podman-pause-b615e2e9.scope.
Feb 24 14:57:54 np0005628225.novalocal sudo[12341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskbzrwasvrvlpsfbbysdttobrfzabtk ; /usr/bin/python3'
Feb 24 14:57:54 np0005628225.novalocal sudo[12341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:57:55 np0005628225.novalocal python3[12371]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.69:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.69:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:57:55 np0005628225.novalocal python3[12371]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 24 14:57:55 np0005628225.novalocal sudo[12341]: pam_unix(sudo:session): session closed for user root
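The blockinfile task above appends a marked TOML stanza to /etc/containers/registries.conf so podman/buildah will pull from the CI registry without TLS. A sketch of the equivalent shell step, reproducing the block and the BEGIN/END markers recorded in the log:

    # Append the same managed block that ansible.builtin.blockinfile wrote
    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.69:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF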
Feb 24 14:57:55 np0005628225.novalocal sshd-session[8577]: Connection closed by 38.102.83.114 port 44848
Feb 24 14:57:55 np0005628225.novalocal sshd-session[8574]: pam_unix(sshd:session): session closed for user zuul
Feb 24 14:57:55 np0005628225.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 24 14:57:55 np0005628225.novalocal systemd[1]: session-4.scope: Consumed 52.318s CPU time.
Feb 24 14:57:55 np0005628225.novalocal systemd-logind[813]: Session 4 logged out. Waiting for processes to exit.
Feb 24 14:57:55 np0005628225.novalocal systemd-logind[813]: Removed session 4.
Feb 24 14:58:14 np0005628225.novalocal sshd-session[22709]: Connection closed by 38.102.83.66 port 39122 [preauth]
Feb 24 14:58:14 np0005628225.novalocal sshd-session[22719]: Connection closed by 38.102.83.66 port 39130 [preauth]
Feb 24 14:58:14 np0005628225.novalocal sshd-session[22717]: Unable to negotiate with 38.102.83.66 port 39146: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 24 14:58:14 np0005628225.novalocal sshd-session[22713]: Unable to negotiate with 38.102.83.66 port 39156: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 24 14:58:14 np0005628225.novalocal sshd-session[22721]: Unable to negotiate with 38.102.83.66 port 39162: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 24 14:58:18 np0005628225.novalocal sshd-session[24301]: Accepted publickey for zuul from 38.102.83.114 port 50524 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 14:58:18 np0005628225.novalocal systemd-logind[813]: New session 5 of user zuul.
Feb 24 14:58:18 np0005628225.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 24 14:58:18 np0005628225.novalocal sshd-session[24301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 14:58:18 np0005628225.novalocal python3[24426]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOL+pvxxUIwu/QqgfN70/qcft7PqrFEtGX4tYQVzFS5iATWn/4Ihz+og/k5a7/Xtp5kv7m+xKBkMGdiWTewOE4o= zuul@np0005628224.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:58:19 np0005628225.novalocal sudo[24610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxccfkejqrxyfkboheoaensukyqaden ; /usr/bin/python3'
Feb 24 14:58:19 np0005628225.novalocal sudo[24610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:19 np0005628225.novalocal python3[24620]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOL+pvxxUIwu/QqgfN70/qcft7PqrFEtGX4tYQVzFS5iATWn/4Ihz+og/k5a7/Xtp5kv7m+xKBkMGdiWTewOE4o= zuul@np0005628224.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:58:19 np0005628225.novalocal sudo[24610]: pam_unix(sudo:session): session closed for user root
Feb 24 14:58:19 np0005628225.novalocal sudo[24985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlazoqwronmigymfjpgutxpedhpsrgv ; /usr/bin/python3'
Feb 24 14:58:19 np0005628225.novalocal sudo[24985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:20 np0005628225.novalocal python3[24996]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005628225.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 24 14:58:20 np0005628225.novalocal useradd[25077]: new group: name=cloud-admin, GID=1002
Feb 24 14:58:20 np0005628225.novalocal useradd[25077]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Feb 24 14:58:20 np0005628225.novalocal sudo[24985]: pam_unix(sudo:session): session closed for user root
Feb 24 14:58:20 np0005628225.novalocal sudo[25217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uftkzinztzsxpdgqoepdvghxdgpfxhbt ; /usr/bin/python3'
Feb 24 14:58:20 np0005628225.novalocal sudo[25217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:20 np0005628225.novalocal python3[25228]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOL+pvxxUIwu/QqgfN70/qcft7PqrFEtGX4tYQVzFS5iATWn/4Ihz+og/k5a7/Xtp5kv7m+xKBkMGdiWTewOE4o= zuul@np0005628224.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 24 14:58:20 np0005628225.novalocal sudo[25217]: pam_unix(sudo:session): session closed for user root
Feb 24 14:58:20 np0005628225.novalocal sudo[25510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khlmkmuefmpxfhxwzsgqbtavyzhlsupy ; /usr/bin/python3'
Feb 24 14:58:20 np0005628225.novalocal sudo[25510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:21 np0005628225.novalocal python3[25517]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 14:58:21 np0005628225.novalocal sudo[25510]: pam_unix(sudo:session): session closed for user root
Feb 24 14:58:21 np0005628225.novalocal sudo[25805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srqigfjpluolcaowtayxnlsqbosjahpj ; /usr/bin/python3'
Feb 24 14:58:21 np0005628225.novalocal sudo[25805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:21 np0005628225.novalocal python3[25815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771945100.8005326-135-150384346486519/source _original_basename=tmp7uonrwkk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 14:58:21 np0005628225.novalocal sudo[25805]: pam_unix(sudo:session): session closed for user root
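The copy above installs a sudoers drop-in for the new cloud-admin user with mode 0640; its body is not logged (content=NOT_LOGGING_PARAMETER). A hypothetical reconstruction of what such a drop-in typically contains, with a syntax check — the rule itself is an assumption, not recovered from the log:

    # Hypothetical contents -- the actual file body is elided from the log
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin   # validate before relying on it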
Feb 24 14:58:22 np0005628225.novalocal sudo[26168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-powuylaqrwakcnepkblwobaonfvjqigk ; /usr/bin/python3'
Feb 24 14:58:22 np0005628225.novalocal sudo[26168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 14:58:22 np0005628225.novalocal python3[26178]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 24 14:58:22 np0005628225.novalocal systemd[1]: Starting Hostname Service...
Feb 24 14:58:22 np0005628225.novalocal systemd[1]: Started Hostname Service.
Feb 24 14:58:22 np0005628225.novalocal systemd-hostnamed[26319]: Changed pretty hostname to 'compute-0'
Feb 24 14:58:22 compute-0 systemd-hostnamed[26319]: Hostname set to <compute-0> (static)
Feb 24 14:58:22 compute-0 NetworkManager[7690]: <info>  [1771945102.4744] hostname: static hostname changed from "np0005628225.novalocal" to "compute-0"
Feb 24 14:58:22 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 14:58:22 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 14:58:22 compute-0 sudo[26168]: pam_unix(sudo:session): session closed for user root
Feb 24 14:58:22 compute-0 sshd-session[24368]: Connection closed by 38.102.83.114 port 50524
Feb 24 14:58:22 compute-0 sshd-session[24301]: pam_unix(sshd:session): session closed for user zuul
Feb 24 14:58:22 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Feb 24 14:58:22 compute-0 systemd[1]: session-5.scope: Consumed 2.386s CPU time.
Feb 24 14:58:22 compute-0 systemd-logind[813]: Session 5 logged out. Waiting for processes to exit.
Feb 24 14:58:22 compute-0 systemd-logind[813]: Removed session 5.
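The ansible.builtin.hostname task (use=systemd) drives systemd-hostnamed over D-Bus, which is why the log shows both the pretty and static hostnames changing and NetworkManager picking up the new name. The equivalent manual step:

    # Sets the static (and, absent overrides, pretty/transient) hostname
    hostnamectl set-hostname compute-0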
Feb 24 14:58:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 14:58:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 14:58:32 compute-0 systemd[1]: man-db-cache-update.service: Consumed 48.418s CPU time.
Feb 24 14:58:32 compute-0 systemd[1]: run-r0289e898902a4d03b17dd3846c4777f0.service: Deactivated successfully.
Feb 24 14:58:32 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 14:58:52 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 24 15:00:25 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 24 15:00:25 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 24 15:00:25 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 24 15:00:25 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 24 15:01:01 compute-0 CROND[30504]: (root) CMD (run-parts /etc/cron.hourly)
Feb 24 15:01:01 compute-0 run-parts[30507]: (/etc/cron.hourly) starting 0anacron
Feb 24 15:01:02 compute-0 anacron[30515]: Anacron started on 2026-02-24
Feb 24 15:01:02 compute-0 anacron[30515]: Will run job `cron.daily' in 30 min.
Feb 24 15:01:02 compute-0 anacron[30515]: Will run job `cron.weekly' in 50 min.
Feb 24 15:01:02 compute-0 anacron[30515]: Will run job `cron.monthly' in 70 min.
Feb 24 15:01:02 compute-0 anacron[30515]: Jobs will be executed sequentially
Feb 24 15:01:02 compute-0 run-parts[30517]: (/etc/cron.hourly) finished 0anacron
Feb 24 15:01:02 compute-0 CROND[30503]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 24 15:02:47 compute-0 sshd-session[30519]: Accepted publickey for zuul from 38.102.83.66 port 32818 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 15:02:47 compute-0 systemd-logind[813]: New session 6 of user zuul.
Feb 24 15:02:47 compute-0 systemd[1]: Started Session 6 of User zuul.
Feb 24 15:02:47 compute-0 sshd-session[30519]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:02:48 compute-0 python3[30595]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:02:49 compute-0 sudo[30709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-updrgnfznvloehzyaruvqgbkxllhjtmq ; /usr/bin/python3'
Feb 24 15:02:49 compute-0 sudo[30709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:49 compute-0 python3[30711]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:49 compute-0 sudo[30709]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:50 compute-0 sudo[30782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmiuaamaovagjfgwkgllmnthggbaoyer ; /usr/bin/python3'
Feb 24 15:02:50 compute-0 sudo[30782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:50 compute-0 python3[30784]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=delorean.repo follow=False checksum=c7624fe5e858d4139de1ac159778eb6fd097c2ca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:50 compute-0 sudo[30782]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:50 compute-0 sudo[30808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igmijjduzjmqilugoquhztsiutobavey ; /usr/bin/python3'
Feb 24 15:02:50 compute-0 sudo[30808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:50 compute-0 python3[30810]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:50 compute-0 sudo[30808]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:50 compute-0 sudo[30881]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hesquqkhvlxrdxqiqrefzfvtjfuzfhds ; /usr/bin/python3'
Feb 24 15:02:50 compute-0 sudo[30881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:50 compute-0 python3[30883]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:50 compute-0 sudo[30881]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:50 compute-0 sudo[30907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcdfudjxgstarsbvsorklpzexecavfyz ; /usr/bin/python3'
Feb 24 15:02:50 compute-0 sudo[30907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:50 compute-0 python3[30909]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:50 compute-0 sudo[30907]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:51 compute-0 sudo[30980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jflurxkqbwbdkhqjzpiqklnuvosgvjcd ; /usr/bin/python3'
Feb 24 15:02:51 compute-0 sudo[30980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:51 compute-0 python3[30982]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:51 compute-0 sudo[30980]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:51 compute-0 sudo[31006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtynriytjqctrzwrawixnjehoqkmaxk ; /usr/bin/python3'
Feb 24 15:02:51 compute-0 sudo[31006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:51 compute-0 python3[31008]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:51 compute-0 sudo[31006]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:51 compute-0 sudo[31079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yenxuoqzfklqerpymssolwdkbywxcbkz ; /usr/bin/python3'
Feb 24 15:02:51 compute-0 sudo[31079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:51 compute-0 python3[31081]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:51 compute-0 sudo[31079]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:51 compute-0 sudo[31105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzarwoticafbupvmzlpbraqlbxsgddvy ; /usr/bin/python3'
Feb 24 15:02:51 compute-0 sudo[31105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:52 compute-0 python3[31107]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:52 compute-0 sudo[31105]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:52 compute-0 sudo[31178]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpeczkzpjyobifsalwahrxvoppqjcmd ; /usr/bin/python3'
Feb 24 15:02:52 compute-0 sudo[31178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:52 compute-0 python3[31180]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:52 compute-0 sudo[31178]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:52 compute-0 sudo[31204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxfexutiskipbtpewxtxlwtcjosnjjq ; /usr/bin/python3'
Feb 24 15:02:52 compute-0 sudo[31204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:52 compute-0 python3[31206]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:52 compute-0 sudo[31204]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:52 compute-0 sudo[31277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxgkqumkhkjumpqgdgpcptckfdpigtd ; /usr/bin/python3'
Feb 24 15:02:52 compute-0 sudo[31277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:53 compute-0 python3[31279]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:53 compute-0 sudo[31277]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:53 compute-0 sudo[31303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzwfjmxxolbxotqceenrpgmuzmshang ; /usr/bin/python3'
Feb 24 15:02:53 compute-0 sudo[31303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:53 compute-0 python3[31305]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 24 15:02:53 compute-0 sudo[31303]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:53 compute-0 sudo[31376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfcdqyawtglverghefrbepdbeobqaymg ; /usr/bin/python3'
Feb 24 15:02:53 compute-0 sudo[31376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:02:53 compute-0 python3[31378]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771945369.4980805-34264-66988408391646/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=06a0a916cb7cbc51b08d6616a672f1322305cccf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:02:53 compute-0 sudo[31376]: pam_unix(sudo:session): session closed for user root
Feb 24 15:02:56 compute-0 sshd-session[31403]: Connection closed by 192.168.122.11 port 34446 [preauth]
Feb 24 15:02:56 compute-0 sshd-session[31404]: Connection closed by 192.168.122.11 port 34460 [preauth]
Feb 24 15:02:56 compute-0 sshd-session[31405]: Unable to negotiate with 192.168.122.11 port 34476: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 24 15:02:56 compute-0 sshd-session[31406]: Unable to negotiate with 192.168.122.11 port 34480: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 24 15:02:56 compute-0 sshd-session[31407]: Unable to negotiate with 192.168.122.11 port 34486: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 24 15:03:45 compute-0 sshd-session[31413]: Invalid user test from 80.94.95.116 port 22120
Feb 24 15:03:45 compute-0 sshd-session[31413]: Connection closed by invalid user test 80.94.95.116 port 22120 [preauth]
Feb 24 15:05:39 compute-0 python3[31439]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:09:29 compute-0 sshd-session[31443]: banner exchange: Connection from 91.238.181.93 port 65499: invalid format
Feb 24 15:10:39 compute-0 sshd-session[30522]: Received disconnect from 38.102.83.66 port 32818:11: disconnected by user
Feb 24 15:10:39 compute-0 sshd-session[30522]: Disconnected from user zuul 38.102.83.66 port 32818
Feb 24 15:10:39 compute-0 sshd-session[30519]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:10:39 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Feb 24 15:10:39 compute-0 systemd[1]: session-6.scope: Consumed 4.507s CPU time.
Feb 24 15:10:39 compute-0 systemd-logind[813]: Session 6 logged out. Waiting for processes to exit.
Feb 24 15:10:39 compute-0 systemd-logind[813]: Removed session 6.
Feb 24 15:15:04 compute-0 sshd-session[31446]: Connection closed by authenticating user operator 80.94.95.115 port 47764 [preauth]
Feb 24 15:18:55 compute-0 sshd-session[31448]: Accepted publickey for zuul from 192.168.122.30 port 50824 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:18:55 compute-0 systemd-logind[813]: New session 7 of user zuul.
Feb 24 15:18:55 compute-0 systemd[1]: Started Session 7 of User zuul.
Feb 24 15:18:55 compute-0 sshd-session[31448]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:18:57 compute-0 python3.9[31601]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:18:58 compute-0 sudo[31780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwwynfrzstpahzwivlsoahqywynpgtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946337.7619326-27-168553573654042/AnsiballZ_command.py'
Feb 24 15:18:58 compute-0 sudo[31780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:18:58 compute-0 python3.9[31783]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:19:05 compute-0 sudo[31780]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:05 compute-0 sshd-session[31451]: Connection closed by 192.168.122.30 port 50824
Feb 24 15:19:05 compute-0 sshd-session[31448]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:19:05 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Feb 24 15:19:05 compute-0 systemd[1]: session-7.scope: Consumed 7.001s CPU time.
Feb 24 15:19:05 compute-0 systemd-logind[813]: Session 7 logged out. Waiting for processes to exit.
Feb 24 15:19:05 compute-0 systemd-logind[813]: Removed session 7.
Feb 24 15:19:11 compute-0 sshd-session[31842]: Accepted publickey for zuul from 192.168.122.30 port 55062 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:19:11 compute-0 systemd-logind[813]: New session 8 of user zuul.
Feb 24 15:19:11 compute-0 systemd[1]: Started Session 8 of User zuul.
Feb 24 15:19:11 compute-0 sshd-session[31842]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:19:12 compute-0 python3.9[31995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:19:13 compute-0 sshd-session[31845]: Connection closed by 192.168.122.30 port 55062
Feb 24 15:19:13 compute-0 sshd-session[31842]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:19:13 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Feb 24 15:19:13 compute-0 systemd-logind[813]: Session 8 logged out. Waiting for processes to exit.
Feb 24 15:19:13 compute-0 systemd-logind[813]: Removed session 8.
Feb 24 15:19:28 compute-0 sshd-session[32023]: Accepted publickey for zuul from 192.168.122.30 port 38622 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:19:28 compute-0 systemd-logind[813]: New session 9 of user zuul.
Feb 24 15:19:28 compute-0 systemd[1]: Started Session 9 of User zuul.
Feb 24 15:19:28 compute-0 sshd-session[32023]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:19:29 compute-0 python3.9[32176]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 24 15:19:30 compute-0 python3.9[32350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:19:31 compute-0 sudo[32500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxygxuwfywpmxlopskogoihufveutxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946371.078073-40-232110915254338/AnsiballZ_command.py'
Feb 24 15:19:31 compute-0 sudo[32500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:31 compute-0 python3.9[32503]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:19:31 compute-0 sudo[32500]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:32 compute-0 sudo[32654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cesnowauapzkikmsyriwxnesrdnoahnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946372.1025987-52-75396766500673/AnsiballZ_stat.py'
Feb 24 15:19:32 compute-0 sudo[32654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:32 compute-0 python3.9[32657]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:19:32 compute-0 sudo[32654]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:33 compute-0 sudo[32807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypjomwngmqzupafpefhpoeyenbpslnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946372.9140294-60-100847666383771/AnsiballZ_file.py'
Feb 24 15:19:33 compute-0 sudo[32807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:33 compute-0 python3.9[32810]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:19:33 compute-0 sudo[32807]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:33 compute-0 sudo[32960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqnzdminnyvmgppypworwnyiwgsowcht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946373.6988695-68-246395685858660/AnsiballZ_stat.py'
Feb 24 15:19:33 compute-0 sudo[32960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:34 compute-0 python3.9[32963]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:19:34 compute-0 sudo[32960]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:34 compute-0 sudo[33084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjviffebewdtmejzynsxhkttnsapquqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946373.6988695-68-246395685858660/AnsiballZ_copy.py'
Feb 24 15:19:34 compute-0 sudo[33084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:34 compute-0 python3.9[33087]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946373.6988695-68-246395685858660/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:19:34 compute-0 sudo[33084]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:35 compute-0 sudo[33237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwppisxzcojfhtgylyrvxfpaiqplxlce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946374.8844137-83-192498370934918/AnsiballZ_setup.py'
Feb 24 15:19:35 compute-0 sudo[33237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:35 compute-0 python3.9[33240]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:19:35 compute-0 sudo[33237]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:36 compute-0 sudo[33394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjohbzzatkohlziofccpjfhpemirdcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946375.8230174-91-127886391298906/AnsiballZ_file.py'
Feb 24 15:19:36 compute-0 sudo[33394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:36 compute-0 python3.9[33397]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:19:36 compute-0 sudo[33394]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:36 compute-0 sudo[33547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvuvagjbsmivtzhvvbfrtdguhcjzoom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946376.4563832-100-72204930346054/AnsiballZ_file.py'
Feb 24 15:19:36 compute-0 sudo[33547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:36 compute-0 python3.9[33550]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:19:36 compute-0 sudo[33547]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:37 compute-0 python3.9[33700]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:19:40 compute-0 python3.9[33954]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:19:40 compute-0 python3.9[34104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:19:42 compute-0 python3.9[34258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:19:42 compute-0 sudo[34414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufxuomtcglxtjqsowxxtovevvobbdmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946382.7159016-148-109498637805246/AnsiballZ_setup.py'
Feb 24 15:19:42 compute-0 sudo[34414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:43 compute-0 python3.9[34417]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:19:43 compute-0 sudo[34414]: pam_unix(sudo:session): session closed for user root
Feb 24 15:19:43 compute-0 sudo[34499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upqhjrotdsoijexapovboqznavaldlzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946382.7159016-148-109498637805246/AnsiballZ_dnf.py'
Feb 24 15:19:43 compute-0 sudo[34499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:19:44 compute-0 python3.9[34502]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:20:28 compute-0 systemd[1]: Reloading.
Feb 24 15:20:28 compute-0 systemd-rc-local-generator[34693]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:20:28 compute-0 systemd[1]: Starting dnf makecache...
Feb 24 15:20:28 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 24 15:20:28 compute-0 dnf[34718]: Failed determining last makecache time.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-barbican-42b4c41831408a8e323 115 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-glean-642fffe0203a8ffcc2443db52 169 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-cinder-e95a374f4f00ef02d562d 160 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-stevedore-c4acc5639fd2329372142 162 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd[1]: Reloading.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-cloudkitty-tests-tempest-ef9563 162 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-diskimage-builder-cbb4478c143869181ba9 162 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd-rc-local-generator[34755]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-nova-5cfeecbf22fca58822607dd 156 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-designate-tests-tempest-347fdbc 150 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-glance-1fd12c29b339f30fe823e 168 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 173 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-manila-8fa2b5793100022b4d0f6 149 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-whitebox-neutron-tests-tempest- 141 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-octavia-76dfc1e35cf7f4dd6102 162 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-watcher-c014f81a8647287f6dcc 148 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-tcib-b403f1051724db0286e1418f59 154 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd[1]: Reloading.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 171 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-swift-dc98a8463506ac520c469a 127 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd-rc-local-generator[34813]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-python-tempestconf-8e33668cda707818ee1 132 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 dnf[34718]: delorean-openstack-heat-ui-013accbfd179753bc3f0 127 kB/s | 3.0 kB     00:00
Feb 24 15:20:28 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 24 15:20:28 compute-0 dnf[34718]: CentOS Stream 9 - BaseOS                         62 kB/s | 6.1 kB     00:00
Feb 24 15:20:28 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:20:28 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:20:28 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:20:28 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:20:29 compute-0 dnf[34718]: CentOS Stream 9 - AppStream                      53 kB/s | 6.5 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: CentOS Stream 9 - CRB                            63 kB/s | 6.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: CentOS Stream 9 - Extras packages                79 kB/s | 7.6 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: dlrn-antelope-testing                           157 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: dlrn-antelope-build-deps                        157 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: centos9-rabbitmq                                 57 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: centos9-storage                                  73 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: centos9-opstools                                 35 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: NFV SIG OpenvSwitch                             123 kB/s | 3.0 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: repo-setup-centos-appstream                     152 kB/s | 4.4 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: repo-setup-centos-baseos                        148 kB/s | 3.9 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: repo-setup-centos-highavailability              172 kB/s | 3.9 kB     00:00
Feb 24 15:20:29 compute-0 dnf[34718]: repo-setup-centos-powertools                    175 kB/s | 4.3 kB     00:00
Feb 24 15:20:30 compute-0 dnf[34718]: Extra Packages for Enterprise Linux 9 - x86_64  188 kB/s |  24 kB     00:00
Feb 24 15:20:30 compute-0 dnf[34718]: Metadata cache created.
Feb 24 15:20:30 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 24 15:20:30 compute-0 systemd[1]: Finished dnf makecache.
Feb 24 15:20:30 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.606s CPU time.
Feb 24 15:21:15 compute-0 sshd-session[35051]: Connection closed by 120.48.56.86 port 37048
Feb 24 15:21:23 compute-0 kernel: SELinux:  Converting 2727 SID table entries...
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:21:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:21:23 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 24 15:21:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:21:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:21:23 compute-0 systemd[1]: Reloading.
Feb 24 15:21:23 compute-0 systemd-rc-local-generator[35163]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:21:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:21:24 compute-0 sudo[34499]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:21:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:21:24 compute-0 systemd[1]: run-r8d2ab5f416f1445187c8b62feb41d306.service: Deactivated successfully.
Feb 24 15:21:24 compute-0 sudo[36096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahrlceeigsujrdcmbsudoadgghbxkheo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946484.5147214-160-20958846987113/AnsiballZ_command.py'
Feb 24 15:21:24 compute-0 sudo[36096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:24 compute-0 python3.9[36099]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:21:25 compute-0 sudo[36096]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:26 compute-0 sudo[36378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njmldmzdhettjqldwdzcefdumlvzutyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946486.0652895-168-79131278135842/AnsiballZ_selinux.py'
Feb 24 15:21:26 compute-0 sudo[36378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:27 compute-0 python3.9[36381]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 24 15:21:27 compute-0 sudo[36378]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:27 compute-0 sudo[36531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupxmajfjiynsqvjoqvpivnipkyojjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946487.4174697-179-52165909127570/AnsiballZ_command.py'
Feb 24 15:21:27 compute-0 sudo[36531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:27 compute-0 python3.9[36534]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 24 15:21:28 compute-0 sudo[36531]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:28 compute-0 sudo[36685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtfhsgjbquaeakpfnwcwbcyipomnavqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946488.5438282-187-41337901506808/AnsiballZ_file.py'
Feb 24 15:21:28 compute-0 sudo[36685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:31 compute-0 python3.9[36688]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:21:31 compute-0 sudo[36685]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:32 compute-0 sudo[36838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpobuptvgemudmvbjshwbmjtxjttjnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946492.026034-195-275492503805453/AnsiballZ_mount.py'
Feb 24 15:21:32 compute-0 sudo[36838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:32 compute-0 python3.9[36841]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 24 15:21:32 compute-0 sudo[36838]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:33 compute-0 sudo[36991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjsnhrysvqmzoadbxprkfnouxvbwchvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946493.5417273-223-11441071502111/AnsiballZ_file.py'
Feb 24 15:21:33 compute-0 sudo[36991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:33 compute-0 python3.9[36994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:21:33 compute-0 sudo[36991]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:34 compute-0 sudo[37144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaattklhylkcyhitqzopvpzdggbblihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946494.0953178-231-271684290276305/AnsiballZ_stat.py'
Feb 24 15:21:34 compute-0 sudo[37144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:34 compute-0 python3.9[37147]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:21:34 compute-0 sudo[37144]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:34 compute-0 sudo[37268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqgwycujsdfkqtfmoontchcgnitdhtwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946494.0953178-231-271684290276305/AnsiballZ_copy.py'
Feb 24 15:21:34 compute-0 sudo[37268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:35 compute-0 python3.9[37271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946494.0953178-231-271684290276305/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:21:35 compute-0 sudo[37268]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:35 compute-0 sudo[37423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wayymmhzwyyqpbgvdiktltrbwbfksuvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946495.5076303-255-266078406856901/AnsiballZ_stat.py'
Feb 24 15:21:35 compute-0 sudo[37423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:35 compute-0 python3.9[37426]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:21:35 compute-0 sudo[37423]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:36 compute-0 sudo[37576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycvnzbdaekwqyccemidonlqqenjgvhbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946496.092575-263-181261523521758/AnsiballZ_command.py'
Feb 24 15:21:36 compute-0 sudo[37576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:36 compute-0 python3.9[37579]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:21:36 compute-0 sudo[37576]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:36 compute-0 sudo[37730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evohjixwyktnjqcsqlfojaozznudlxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946496.7462113-271-273505922689081/AnsiballZ_file.py'
Feb 24 15:21:36 compute-0 sudo[37730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:37 compute-0 python3.9[37733]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:21:37 compute-0 sudo[37730]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:37 compute-0 sudo[37883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnukldhziiqfnunpnmqnxrgpqarzanoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946497.5577352-282-199562423350671/AnsiballZ_getent.py'
Feb 24 15:21:37 compute-0 sudo[37883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:38 compute-0 python3.9[37886]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 24 15:21:38 compute-0 sudo[37883]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:38 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:21:38 compute-0 sshd-session[37319]: Received disconnect from 120.48.56.86 port 55280:11:  [preauth]
Feb 24 15:21:38 compute-0 sshd-session[37319]: Disconnected from authenticating user root 120.48.56.86 port 55280 [preauth]
Feb 24 15:21:38 compute-0 sudo[38038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcidrvotwxoxowxtwbamgzqctsxixvls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946498.2874691-290-40917059109703/AnsiballZ_group.py'
Feb 24 15:21:38 compute-0 sudo[38038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:38 compute-0 python3.9[38041]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:21:38 compute-0 groupadd[38042]: group added to /etc/group: name=qemu, GID=107
Feb 24 15:21:38 compute-0 groupadd[38042]: group added to /etc/gshadow: name=qemu
Feb 24 15:21:38 compute-0 groupadd[38042]: new group: name=qemu, GID=107
Feb 24 15:21:38 compute-0 sudo[38038]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:39 compute-0 sudo[38197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrduchqsnkttjrycjnerfjzagfztjry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946499.0648031-298-250328676413022/AnsiballZ_user.py'
Feb 24 15:21:39 compute-0 sudo[38197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:39 compute-0 python3.9[38200]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 24 15:21:39 compute-0 useradd[38202]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/1
Feb 24 15:21:39 compute-0 sudo[38197]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:40 compute-0 sudo[38358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byuexcmbieurvetvcmduulshvuviwxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946499.9485166-306-257108803170051/AnsiballZ_getent.py'
Feb 24 15:21:40 compute-0 sudo[38358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:40 compute-0 python3.9[38361]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 24 15:21:40 compute-0 sudo[38358]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:40 compute-0 sudo[38512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uotgrpojpgxbgplagbhbqnosbklfjueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946500.548735-314-250473438221552/AnsiballZ_group.py'
Feb 24 15:21:40 compute-0 sudo[38512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:40 compute-0 python3.9[38515]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:21:41 compute-0 groupadd[38516]: group added to /etc/group: name=hugetlbfs, GID=42477
Feb 24 15:21:41 compute-0 groupadd[38516]: group added to /etc/gshadow: name=hugetlbfs
Feb 24 15:21:41 compute-0 groupadd[38516]: new group: name=hugetlbfs, GID=42477
Feb 24 15:21:41 compute-0 sudo[38512]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:41 compute-0 sudo[38671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gikosuqabajgvabldwzfcnnlufixpvax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946501.2766342-323-128217842958497/AnsiballZ_file.py'
Feb 24 15:21:41 compute-0 sudo[38671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:41 compute-0 python3.9[38674]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 24 15:21:41 compute-0 sudo[38671]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:42 compute-0 sudo[38824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lugbyzkefxjimwhifyiumjhywnlxzvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946502.0143986-334-26594598451094/AnsiballZ_dnf.py'
Feb 24 15:21:42 compute-0 sudo[38824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:42 compute-0 python3.9[38827]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:21:44 compute-0 sudo[38824]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:45 compute-0 sudo[38978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubulekqksnevhzlckxqsnhzjutjelut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946504.9914415-342-276158416708903/AnsiballZ_file.py'
Feb 24 15:21:45 compute-0 sudo[38978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:45 compute-0 python3.9[38981]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:21:45 compute-0 sudo[38978]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:45 compute-0 sudo[39131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcgmrczopcfazmgyzukycnboikbyjqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946505.6989026-350-215906445911189/AnsiballZ_stat.py'
Feb 24 15:21:45 compute-0 sudo[39131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:46 compute-0 python3.9[39134]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:21:46 compute-0 sudo[39131]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:46 compute-0 sudo[39255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botcxbwxygczhdzktjxgwjuamifzqtse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946505.6989026-350-215906445911189/AnsiballZ_copy.py'
Feb 24 15:21:46 compute-0 sudo[39255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:46 compute-0 python3.9[39258]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946505.6989026-350-215906445911189/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:21:46 compute-0 sudo[39255]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:47 compute-0 sudo[39408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ragnuaydxeigzdjhhxtkydfxslbwiklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946506.9383245-365-116008161071838/AnsiballZ_systemd.py'
Feb 24 15:21:47 compute-0 sudo[39408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:47 compute-0 python3.9[39411]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:21:47 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 24 15:21:47 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 24 15:21:47 compute-0 kernel: Bridge firewalling registered
Feb 24 15:21:47 compute-0 systemd-modules-load[39415]: Inserted module 'br_netfilter'
Feb 24 15:21:47 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 24 15:21:47 compute-0 sudo[39408]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:48 compute-0 sudo[39569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfnvlfxwmnswjhcmmguqtzxnaufotjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946508.1279364-373-155105715330727/AnsiballZ_stat.py'
Feb 24 15:21:48 compute-0 sudo[39569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:48 compute-0 python3.9[39572]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:21:48 compute-0 sudo[39569]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:48 compute-0 sudo[39693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfokiczdklunjapuslltqmqdspthcymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946508.1279364-373-155105715330727/AnsiballZ_copy.py'
Feb 24 15:21:48 compute-0 sudo[39693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:49 compute-0 python3.9[39696]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946508.1279364-373-155105715330727/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:21:49 compute-0 sudo[39693]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:49 compute-0 sudo[39846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpbhldvijpjjbpuzucvczqykeebjdakj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946509.490753-391-273370866183959/AnsiballZ_dnf.py'
Feb 24 15:21:49 compute-0 sudo[39846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:21:49 compute-0 python3.9[39849]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:21:56 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:21:56 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:21:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:21:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:21:57 compute-0 systemd[1]: Reloading.
Feb 24 15:21:57 compute-0 systemd-rc-local-generator[39912]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:21:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:21:57 compute-0 sudo[39846]: pam_unix(sudo:session): session closed for user root
Feb 24 15:21:58 compute-0 python3.9[41650]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:21:59 compute-0 python3.9[42707]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 24 15:22:00 compute-0 python3.9[43448]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:22:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:22:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:22:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.364s CPU time.
Feb 24 15:22:00 compute-0 systemd[1]: run-r1cde3000673448d08aa2c7bce22440bf.service: Deactivated successfully.
Feb 24 15:22:00 compute-0 sudo[44099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeqegfeilpbornxgxhkekyzlneimnrou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946520.4330416-430-177285709126117/AnsiballZ_command.py'
Feb 24 15:22:00 compute-0 sudo[44099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:00 compute-0 python3.9[44102]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:00 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 24 15:22:01 compute-0 systemd[1]: Starting Authorization Manager...
Feb 24 15:22:01 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 24 15:22:01 compute-0 polkitd[44319]: Started polkitd version 0.117
Feb 24 15:22:01 compute-0 polkitd[44319]: Loading rules from directory /etc/polkit-1/rules.d
Feb 24 15:22:01 compute-0 polkitd[44319]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 24 15:22:01 compute-0 polkitd[44319]: Finished loading, compiling and executing 2 rules
Feb 24 15:22:01 compute-0 systemd[1]: Started Authorization Manager.
Feb 24 15:22:01 compute-0 polkitd[44319]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 24 15:22:01 compute-0 sudo[44099]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:01 compute-0 sudo[44487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkgemslatvfsuxkypzzfubmtanxvzgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946521.6479678-439-43740059347707/AnsiballZ_systemd.py'
Feb 24 15:22:01 compute-0 sudo[44487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:02 compute-0 python3.9[44490]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:22:02 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 24 15:22:02 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 24 15:22:02 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 24 15:22:02 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 24 15:22:02 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 24 15:22:02 compute-0 sudo[44487]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:03 compute-0 python3.9[44652]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 24 15:22:05 compute-0 sudo[44802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyxgcdyodxovrwhyzglgxxnjougjhwph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946524.7738156-496-224152565903621/AnsiballZ_systemd.py'
Feb 24 15:22:05 compute-0 sudo[44802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:05 compute-0 python3.9[44805]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:22:05 compute-0 systemd[1]: Reloading.
Feb 24 15:22:05 compute-0 systemd-rc-local-generator[44825]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:22:05 compute-0 sudo[44802]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:05 compute-0 sudo[45000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbriiobwcwazaiyyplrfeoottdeakqwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946525.7540169-496-49123814830061/AnsiballZ_systemd.py'
Feb 24 15:22:05 compute-0 sudo[45000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:06 compute-0 python3.9[45003]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:22:06 compute-0 systemd[1]: Reloading.
Feb 24 15:22:06 compute-0 systemd-rc-local-generator[45030]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:22:06 compute-0 sudo[45000]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:06 compute-0 sudo[45197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzuqywvqorekovvrofzakyskwkacddo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946526.716006-512-8187716405054/AnsiballZ_command.py'
Feb 24 15:22:06 compute-0 sudo[45197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:07 compute-0 python3.9[45200]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:07 compute-0 sudo[45197]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:07 compute-0 sudo[45351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqjvphdimeuvnvqpwyvdvbzninbwylj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946527.2817457-520-209007858758178/AnsiballZ_command.py'
Feb 24 15:22:07 compute-0 sudo[45351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:07 compute-0 python3.9[45354]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:07 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 24 15:22:07 compute-0 sudo[45351]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:08 compute-0 sudo[45505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmzfvnmnvazupnsfitdrycdxoramwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946527.8535128-528-147162574529459/AnsiballZ_command.py'
Feb 24 15:22:08 compute-0 sudo[45505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:08 compute-0 python3.9[45508]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:09 compute-0 sudo[45505]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:10 compute-0 sudo[45668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhkoaetlwolsvyohxirnziyzxvnjcfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946529.8380394-536-141504275466780/AnsiballZ_command.py'
Feb 24 15:22:10 compute-0 sudo[45668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:10 compute-0 python3.9[45671]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:10 compute-0 sudo[45668]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:10 compute-0 sudo[45822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmxxzrlvpkbddavzkhknznkybmbojem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946530.4175968-544-210372038492050/AnsiballZ_systemd.py'
Feb 24 15:22:10 compute-0 sudo[45822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:10 compute-0 python3.9[45825]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:22:11 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 24 15:22:11 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Feb 24 15:22:11 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Feb 24 15:22:11 compute-0 systemd[1]: Starting Apply Kernel Variables...
Feb 24 15:22:11 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 24 15:22:11 compute-0 systemd[1]: Finished Apply Kernel Variables.
Feb 24 15:22:11 compute-0 sudo[45822]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:11 compute-0 sshd-session[32026]: Connection closed by 192.168.122.30 port 38622
Feb 24 15:22:11 compute-0 sshd-session[32023]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:22:11 compute-0 systemd-logind[813]: Session 9 logged out. Waiting for processes to exit.
Feb 24 15:22:11 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Feb 24 15:22:11 compute-0 systemd[1]: session-9.scope: Consumed 2min 1.992s CPU time.
Feb 24 15:22:11 compute-0 systemd-logind[813]: Removed session 9.
Feb 24 15:22:16 compute-0 sshd-session[45855]: Accepted publickey for zuul from 192.168.122.30 port 43990 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:22:16 compute-0 systemd-logind[813]: New session 10 of user zuul.
Feb 24 15:22:16 compute-0 systemd[1]: Started Session 10 of User zuul.
Feb 24 15:22:16 compute-0 sshd-session[45855]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:22:17 compute-0 python3.9[46008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:22:19 compute-0 python3.9[46162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:22:20 compute-0 sudo[46316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sncvxttqvzswfgdnzpvykvjwztznpuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946539.618281-45-8260103904352/AnsiballZ_command.py'
Feb 24 15:22:20 compute-0 sudo[46316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:20 compute-0 python3.9[46319]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:20 compute-0 sudo[46316]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:21 compute-0 python3.9[46470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:22:21 compute-0 sudo[46624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loquvpnuxkffvczqebwvyuxgkbggxzus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946541.5419478-65-57819437279334/AnsiballZ_setup.py'
Feb 24 15:22:21 compute-0 sudo[46624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:22 compute-0 python3.9[46627]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:22:22 compute-0 sudo[46624]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:22 compute-0 sudo[46709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwmcltoekwsomwrrzcedvkndhsrjrng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946541.5419478-65-57819437279334/AnsiballZ_dnf.py'
Feb 24 15:22:22 compute-0 sudo[46709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:22 compute-0 python3.9[46712]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:22:24 compute-0 sudo[46709]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:24 compute-0 sudo[46863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhqlonbiotctcvvnxjuffmupqefdqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946544.2656124-77-44610838101103/AnsiballZ_setup.py'
Feb 24 15:22:24 compute-0 sudo[46863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:24 compute-0 python3.9[46866]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:22:24 compute-0 sudo[46863]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:25 compute-0 sudo[47035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbfzchwrybxawgxnexwcqrjvgmuemdrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946545.1114411-88-188627753569845/AnsiballZ_file.py'
Feb 24 15:22:25 compute-0 sudo[47035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:25 compute-0 python3.9[47038]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:22:25 compute-0 sudo[47035]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:26 compute-0 sudo[47188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbeehacoavijhqqsoqrmbxolazbiqwbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946545.9024858-96-100456808286096/AnsiballZ_command.py'
Feb 24 15:22:26 compute-0 sudo[47188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:26 compute-0 python3.9[47191]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2032704653-merged.mount: Deactivated successfully.
Feb 24 15:22:26 compute-0 podman[47192]: 2026-02-24 15:22:26.37266551 +0000 UTC m=+0.047030617 system refresh
Feb 24 15:22:26 compute-0 sudo[47188]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:26 compute-0 sudo[47352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvczhvhbwzlhnlucdirayoptfuzgxmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946546.546145-104-179977914844790/AnsiballZ_stat.py'
Feb 24 15:22:26 compute-0 sudo[47352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:27 compute-0 python3.9[47355]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:22:27 compute-0 sudo[47352]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:22:27 compute-0 sudo[47476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrljyvohtbovyucbathjezkjustnwkzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946546.546145-104-179977914844790/AnsiballZ_copy.py'
Feb 24 15:22:27 compute-0 sudo[47476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:27 compute-0 python3.9[47479]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946546.546145-104-179977914844790/.source.json follow=False _original_basename=podman_network_config.j2 checksum=e738a99c28a734399e276e8785bb35f32ce7801b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:22:27 compute-0 sudo[47476]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:28 compute-0 sudo[47629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilalnganzbvsbjqmrubxyxjnzkqgdbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946548.1053436-119-184496635995292/AnsiballZ_stat.py'
Feb 24 15:22:28 compute-0 sudo[47629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:28 compute-0 python3.9[47632]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:22:28 compute-0 sudo[47629]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:28 compute-0 sudo[47753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqfbhwptrvikkbiotmhvaeewwsyclcbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946548.1053436-119-184496635995292/AnsiballZ_copy.py'
Feb 24 15:22:28 compute-0 sudo[47753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:29 compute-0 python3.9[47756]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946548.1053436-119-184496635995292/.source.conf follow=False _original_basename=registries.conf.j2 checksum=79ce8ee83554e83d41b0100244ad5e6f36da75cc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:22:29 compute-0 sudo[47753]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:29 compute-0 sudo[47906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjfxorfhwkqzpaivuuhmlfrjzlcgtmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946549.3556724-135-44495102718465/AnsiballZ_ini_file.py'
Feb 24 15:22:29 compute-0 sudo[47906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:29 compute-0 python3.9[47909]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:22:29 compute-0 sudo[47906]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:30 compute-0 sudo[48059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyhsfrmjzhsuwkmzmlecxjlwwjvcgcds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946550.046608-135-8173678943916/AnsiballZ_ini_file.py'
Feb 24 15:22:30 compute-0 sudo[48059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:30 compute-0 python3.9[48062]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:22:30 compute-0 sudo[48059]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:30 compute-0 sudo[48212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wklbvriwqfwggmdriczjsbldepgcvuqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946550.6608493-135-203510824972275/AnsiballZ_ini_file.py'
Feb 24 15:22:30 compute-0 sudo[48212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:31 compute-0 python3.9[48215]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:22:31 compute-0 sudo[48212]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:31 compute-0 sudo[48365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebbcrazgkahfenmxaxrfqpmjbdkrqxkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946551.1544716-135-146512582139229/AnsiballZ_ini_file.py'
Feb 24 15:22:31 compute-0 sudo[48365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:31 compute-0 python3.9[48368]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:22:31 compute-0 sudo[48365]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:32 compute-0 python3.9[48518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:22:33 compute-0 sudo[48670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkxjjfitklhbkcvzkjnoredscazsumi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946552.784366-175-179402752520476/AnsiballZ_dnf.py'
Feb 24 15:22:33 compute-0 sudo[48670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:33 compute-0 python3.9[48673]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:34 compute-0 sudo[48670]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:34 compute-0 sudo[48824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcgpspenprwwmuzhvcsavaymbgjqqvds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946554.6999285-183-145017958135943/AnsiballZ_dnf.py'
Feb 24 15:22:34 compute-0 sudo[48824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:35 compute-0 python3.9[48827]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:36 compute-0 sudo[48824]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:37 compute-0 sudo[48986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxuvuyhadxuhsuuswoahnymjokgfcee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946557.273688-193-150276453391463/AnsiballZ_dnf.py'
Feb 24 15:22:37 compute-0 sudo[48986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:37 compute-0 python3.9[48989]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:38 compute-0 sudo[48986]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:39 compute-0 sudo[49140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xapxphjhpdevzreyxwtefvbiyflzahzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946559.2064166-202-242646797621753/AnsiballZ_dnf.py'
Feb 24 15:22:39 compute-0 sudo[49140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:39 compute-0 python3.9[49143]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:40 compute-0 sudo[49140]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:41 compute-0 sudo[49294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsgqrezvksbyhpuqnfrvituunwcrbouo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946561.1708465-213-18720779869700/AnsiballZ_dnf.py'
Feb 24 15:22:41 compute-0 sudo[49294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:41 compute-0 python3.9[49297]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:43 compute-0 sudo[49294]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:43 compute-0 sudo[49451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyygvbkfmrgjwzdgvinnmgxjvotzhyct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946563.3722436-221-57760137383695/AnsiballZ_dnf.py'
Feb 24 15:22:43 compute-0 sudo[49451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:43 compute-0 python3.9[49454]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:48 compute-0 sudo[49451]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:48 compute-0 sudo[49622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwhiacwlojbzgmhjqlgusaawfyfucllo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946568.6317596-230-26188443608784/AnsiballZ_dnf.py'
Feb 24 15:22:48 compute-0 sudo[49622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:49 compute-0 python3.9[49625]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:22:50 compute-0 sudo[49622]: pam_unix(sudo:session): session closed for user root
Feb 24 15:22:50 compute-0 sudo[49776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwgjapsoomejzdmnnxeuowhptsjxpec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946570.5660481-239-67773094008713/AnsiballZ_dnf.py'
Feb 24 15:22:50 compute-0 sudo[49776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:22:51 compute-0 python3.9[49779]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:23:00 compute-0 sudo[49776]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:01 compute-0 sudo[50113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muuxblcyujqwcuzgwaswsninmiauipvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946581.1121101-248-142390190932568/AnsiballZ_dnf.py'
Feb 24 15:23:01 compute-0 sudo[50113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:01 compute-0 python3.9[50116]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:23:02 compute-0 sudo[50113]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:03 compute-0 sudo[50270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keimxczypmkhyhmskpzaeybampxrlgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946583.1111271-258-79559246643948/AnsiballZ_dnf.py'
Feb 24 15:23:03 compute-0 sudo[50270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:03 compute-0 python3.9[50273]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:23:05 compute-0 sudo[50270]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:05 compute-0 sudo[50430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uycceuycjxdkbivdjqhfdqizbhrrrwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946585.4776149-269-642061281517/AnsiballZ_file.py'
Feb 24 15:23:05 compute-0 sudo[50430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:05 compute-0 python3.9[50433]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:23:05 compute-0 sudo[50430]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:06 compute-0 sudo[50606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckvrtxcldbmdscumcweasqpncbjmsid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946586.0266092-277-206148224714265/AnsiballZ_stat.py'
Feb 24 15:23:06 compute-0 sudo[50606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:06 compute-0 python3.9[50609]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:23:06 compute-0 sudo[50606]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:06 compute-0 sudo[50730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiaonklucidxzuwzbwmlofexudxgmqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946586.0266092-277-206148224714265/AnsiballZ_copy.py'
Feb 24 15:23:06 compute-0 sudo[50730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:06 compute-0 python3.9[50733]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771946586.0266092-277-206148224714265/.source.json _original_basename=.dvxtzled follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:23:06 compute-0 sudo[50730]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:07 compute-0 sshd-session[50275]: Invalid user ubuntu from 120.48.56.86 port 51072
Feb 24 15:23:07 compute-0 sshd-session[50275]: Received disconnect from 120.48.56.86 port 51072:11:  [preauth]
Feb 24 15:23:07 compute-0 sshd-session[50275]: Disconnected from invalid user ubuntu 120.48.56.86 port 51072 [preauth]
Feb 24 15:23:07 compute-0 sudo[50883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhwgkywzuhkzrkkxmvzrcfyxqnznfay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946587.2752268-295-244432757550663/AnsiballZ_podman_image.py'
Feb 24 15:23:07 compute-0 sudo[50883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:07 compute-0 python3.9[50886]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1651023778-lower\x2dmapped.mount: Deactivated successfully.
Feb 24 15:23:12 compute-0 podman[50899]: 2026-02-24 15:23:12.658936559 +0000 UTC m=+4.625578235 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 24 15:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:12 compute-0 sudo[50883]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:13 compute-0 sudo[51192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpupingpvstmpjwzehryqlkrdrgyeiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946593.1159258-306-30166880439529/AnsiballZ_podman_image.py'
Feb 24 15:23:13 compute-0 sudo[51192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:13 compute-0 python3.9[51195]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:21 compute-0 podman[51208]: 2026-02-24 15:23:21.426378669 +0000 UTC m=+7.858971408 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 15:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:21 compute-0 sudo[51192]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:22 compute-0 sudo[51506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvtglldsjhqjmvexrcrenzzsouiunphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946601.845266-316-179886503462424/AnsiballZ_podman_image.py'
Feb 24 15:23:22 compute-0 sudo[51506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:22 compute-0 python3.9[51509]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:28 compute-0 sshd-session[51562]: Connection closed by authenticating user root 185.156.73.233 port 21080 [preauth]
Feb 24 15:23:31 compute-0 podman[51521]: 2026-02-24 15:23:31.477931486 +0000 UTC m=+9.210805771 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 24 15:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:31 compute-0 sudo[51506]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:32 compute-0 sudo[51789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwgcwoasrlvndhbrjfccvfrcxgjgvyly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946611.9805138-327-96469236131298/AnsiballZ_podman_image.py'
Feb 24 15:23:32 compute-0 sudo[51789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:32 compute-0 python3.9[51792]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:47 compute-0 podman[51804]: 2026-02-24 15:23:47.218190937 +0000 UTC m=+14.756499795 image pull 2de33f14cfa6ceeafb6b935f1a9771276123e54a116dc534ba3482038b9ef693 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Feb 24 15:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:47 compute-0 sudo[51789]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:47 compute-0 sudo[52124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvzpskhqlyvuivfiiadxcmbxmxkswqrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946627.474775-327-212411752551097/AnsiballZ_podman_image.py'
Feb 24 15:23:47 compute-0 sudo[52124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:47 compute-0 python3.9[52127]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:49 compute-0 podman[52140]: 2026-02-24 15:23:49.071639799 +0000 UTC m=+1.126785357 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Feb 24 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:49 compute-0 sudo[52124]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:49 compute-0 sudo[52410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiujkgodbzlekzdoysfnebothfvoiywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946629.4711907-343-90113749768993/AnsiballZ_podman_image.py'
Feb 24 15:23:49 compute-0 sudo[52410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:49 compute-0 python3.9[52413]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:52 compute-0 podman[52424]: 2026-02-24 15:23:52.742393707 +0000 UTC m=+2.796891779 image pull 20914a1cbbac726a2580da2b97a9d453e7e0538b5e06ae0c9613bcea0e3e5de9 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 24 15:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:52 compute-0 sudo[52410]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:53 compute-0 sudo[52679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagvwgtbsmewynfunaqprlrfyhentybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946633.031484-343-118215420575791/AnsiballZ_podman_image.py'
Feb 24 15:23:53 compute-0 sudo[52679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:23:53 compute-0 python3.9[52682]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 24 15:23:58 compute-0 podman[52694]: 2026-02-24 15:23:58.694468334 +0000 UTC m=+5.208157932 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 24 15:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:23:58 compute-0 sudo[52679]: pam_unix(sudo:session): session closed for user root
Feb 24 15:23:59 compute-0 sshd-session[45858]: Connection closed by 192.168.122.30 port 43990
Feb 24 15:23:59 compute-0 sshd-session[45855]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:23:59 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Feb 24 15:23:59 compute-0 systemd[1]: session-10.scope: Consumed 2min 3.303s CPU time.
Feb 24 15:23:59 compute-0 systemd-logind[813]: Session 10 logged out. Waiting for processes to exit.
Feb 24 15:23:59 compute-0 systemd-logind[813]: Removed session 10.
Feb 24 15:24:04 compute-0 sshd-session[52942]: Accepted publickey for zuul from 192.168.122.30 port 35460 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:24:04 compute-0 systemd-logind[813]: New session 11 of user zuul.
Feb 24 15:24:04 compute-0 systemd[1]: Started Session 11 of User zuul.
Feb 24 15:24:04 compute-0 sshd-session[52942]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:24:05 compute-0 python3.9[53095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:24:06 compute-0 sudo[53249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnihcdkzodfvgzoxbhrbhiasxxzednui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946645.9636984-32-32949437622349/AnsiballZ_getent.py'
Feb 24 15:24:06 compute-0 sudo[53249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:06 compute-0 python3.9[53252]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 24 15:24:06 compute-0 sudo[53249]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:07 compute-0 sudo[53403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gshsakglwqoettexhcghyubanwgrdqtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946646.6596813-40-192238787608914/AnsiballZ_group.py'
Feb 24 15:24:07 compute-0 sudo[53403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:07 compute-0 python3.9[53406]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:24:07 compute-0 groupadd[53407]: group added to /etc/group: name=openvswitch, GID=42476
Feb 24 15:24:07 compute-0 groupadd[53407]: group added to /etc/gshadow: name=openvswitch
Feb 24 15:24:07 compute-0 groupadd[53407]: new group: name=openvswitch, GID=42476
Feb 24 15:24:07 compute-0 sudo[53403]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:07 compute-0 sudo[53562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjkiaiuftigqppkrceduefpoxhrtvlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946647.428101-48-16900041740202/AnsiballZ_user.py'
Feb 24 15:24:07 compute-0 sudo[53562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:08 compute-0 python3.9[53565]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 24 15:24:08 compute-0 useradd[53567]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/1
Feb 24 15:24:08 compute-0 useradd[53567]: add 'openvswitch' to group 'hugetlbfs'
Feb 24 15:24:08 compute-0 useradd[53567]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 24 15:24:08 compute-0 sudo[53562]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:08 compute-0 sudo[53723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzdcjgdczaoweryzyzapryriqswntxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946648.4012275-58-146112752356547/AnsiballZ_setup.py'
Feb 24 15:24:08 compute-0 sudo[53723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:08 compute-0 python3.9[53726]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:24:09 compute-0 sudo[53723]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:09 compute-0 sudo[53808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obffkutrzkpgqdtzvuzwtugyjgswdgiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946648.4012275-58-146112752356547/AnsiballZ_dnf.py'
Feb 24 15:24:09 compute-0 sudo[53808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:09 compute-0 python3.9[53811]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:24:11 compute-0 sudo[53808]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:11 compute-0 sudo[53971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbuuugpulycbslicuzaicwcttatdktjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946651.3679976-72-160240217811208/AnsiballZ_dnf.py'
Feb 24 15:24:11 compute-0 sudo[53971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:11 compute-0 python3.9[53974]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:24:23 compute-0 kernel: SELinux:  Converting 2740 SID table entries...
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:24:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:24:23 compute-0 groupadd[53997]: group added to /etc/group: name=unbound, GID=994
Feb 24 15:24:23 compute-0 groupadd[53997]: group added to /etc/gshadow: name=unbound
Feb 24 15:24:23 compute-0 groupadd[53997]: new group: name=unbound, GID=994
Feb 24 15:24:23 compute-0 useradd[54004]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Feb 24 15:24:24 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 24 15:24:24 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 24 15:24:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:24:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:24:25 compute-0 systemd[1]: Reloading.
Feb 24 15:24:25 compute-0 systemd-rc-local-generator[54504]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:24:25 compute-0 systemd-sysv-generator[54509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:24:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:24:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:24:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:24:26 compute-0 systemd[1]: run-r7f5c252dd61d430daadb278c5a58f762.service: Deactivated successfully.
Feb 24 15:24:26 compute-0 sudo[53971]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:26 compute-0 sudo[55095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzfslbuocmpmfnjilasgeqnjfjdjsfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946666.1693585-80-20267158141185/AnsiballZ_systemd.py'
Feb 24 15:24:26 compute-0 sudo[55095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:27 compute-0 python3.9[55098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:24:27 compute-0 systemd[1]: Reloading.
Feb 24 15:24:27 compute-0 systemd-rc-local-generator[55121]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:24:27 compute-0 systemd-sysv-generator[55125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:24:27 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Feb 24 15:24:27 compute-0 chown[55147]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 24 15:24:27 compute-0 ovs-ctl[55152]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 24 15:24:27 compute-0 ovs-ctl[55152]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 24 15:24:27 compute-0 ovs-ctl[55152]: Starting ovsdb-server [  OK  ]
Feb 24 15:24:27 compute-0 ovs-vsctl[55201]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 24 15:24:27 compute-0 ovs-vsctl[55221]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ab329b13-e5ce-43e1-b513-c55bd650f251\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 24 15:24:27 compute-0 ovs-ctl[55152]: Configuring Open vSwitch system IDs [  OK  ]
Feb 24 15:24:27 compute-0 ovs-ctl[55152]: Enabling remote OVSDB managers [  OK  ]
Feb 24 15:24:27 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Feb 24 15:24:27 compute-0 ovs-vsctl[55227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 24 15:24:27 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 24 15:24:27 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 24 15:24:27 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 24 15:24:27 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Feb 24 15:24:27 compute-0 ovs-ctl[55271]: Inserting openvswitch module [  OK  ]
Feb 24 15:24:27 compute-0 ovs-ctl[55240]: Starting ovs-vswitchd [  OK  ]
Feb 24 15:24:27 compute-0 ovs-vsctl[55288]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 24 15:24:27 compute-0 ovs-ctl[55240]: Enabling remote OVSDB managers [  OK  ]
Feb 24 15:24:27 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 24 15:24:27 compute-0 systemd[1]: Starting Open vSwitch...
Feb 24 15:24:27 compute-0 systemd[1]: Finished Open vSwitch.
Feb 24 15:24:27 compute-0 sudo[55095]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:28 compute-0 python3.9[55440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:24:29 compute-0 sudo[55590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hduxyrvvmgqnmgmboupfenkocpcpbghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946668.9216082-99-47283315713940/AnsiballZ_sefcontext.py'
Feb 24 15:24:29 compute-0 sudo[55590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:29 compute-0 python3.9[55593]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 24 15:24:30 compute-0 kernel: SELinux:  Converting 2754 SID table entries...
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:24:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:24:30 compute-0 sudo[55590]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:31 compute-0 python3.9[55748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:24:32 compute-0 sudo[55904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orfrrjrhnsuqxlopzoagvhxbbkgfuazb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946671.8902915-117-192031675107091/AnsiballZ_dnf.py'
Feb 24 15:24:32 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 24 15:24:32 compute-0 sudo[55904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:32 compute-0 python3.9[55907]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:24:33 compute-0 sudo[55904]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:33 compute-0 sudo[56060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmipsbnjywqpuwjduryzokkmfgdtkvtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946673.6246908-125-87593597630727/AnsiballZ_command.py'
Feb 24 15:24:33 compute-0 sudo[56060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:34 compute-0 python3.9[56063]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:24:34 compute-0 sudo[56060]: pam_unix(sudo:session): session closed for user root
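rpm -V re-checks every file of the listed packages (checksum, size, mode, owner) against the rpm database and exits non-zero on any discrepancy, so a corrupted install fails the play immediately. As a task (name and changed_when are assumptions; the command itself is verbatim from the log):

    - name: Verify the just-installed packages   # task name assumed
      ansible.builtin.command:
        cmd: >-
          rpm -V driverctl lvm2 crudini jq nftables NetworkManager
          openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
          sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts
          grubby sos
      changed_when: false   # assumed; rpm -V only reads state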
Feb 24 15:24:35 compute-0 sudo[56348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dogcnnzjzaxyblwdsstxpqgzaoybnkhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946674.886334-133-269217898331804/AnsiballZ_file.py'
Feb 24 15:24:35 compute-0 sudo[56348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:35 compute-0 python3.9[56351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 24 15:24:35 compute-0 sudo[56348]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:36 compute-0 sshd-session[55909]: Invalid user ubuntu from 120.48.56.86 port 54178
Feb 24 15:24:36 compute-0 python3.9[56501]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:24:36 compute-0 sshd-session[55909]: Received disconnect from 120.48.56.86 port 54178:11:  [preauth]
Feb 24 15:24:36 compute-0 sshd-session[55909]: Disconnected from invalid user ubuntu 120.48.56.86 port 54178 [preauth]
Feb 24 15:24:36 compute-0 sudo[56653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnkyzihuxyzsoorvmtkoqjxzxmppwfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946676.3563318-149-38426707874715/AnsiballZ_dnf.py'
Feb 24 15:24:36 compute-0 sudo[56653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:36 compute-0 python3.9[56656]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:24:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:24:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:24:38 compute-0 systemd[1]: Reloading.
Feb 24 15:24:38 compute-0 systemd-rc-local-generator[56694]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:24:38 compute-0 systemd-sysv-generator[56700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:24:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:24:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:24:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:24:39 compute-0 systemd[1]: run-r2a7ffb3028a14e069e1e052c9c7cd365.service: Deactivated successfully.
Feb 24 15:24:39 compute-0 sudo[56653]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:39 compute-0 sudo[56978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwsmjctlmpunxxjfmpgzmsvswyknalxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946679.3214705-157-179140846197322/AnsiballZ_systemd.py'
Feb 24 15:24:39 compute-0 sudo[56978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:39 compute-0 python3.9[56981]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
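The NetworkManager-ovs package installed at 15:24:36 only takes effect after a daemon restart; this single task produces everything below, from "Stopping Network Manager..." through "startup complete", including the load of libnm-device-plugin-ovs.so. Sketch (task name assumed):

    - name: Restart NetworkManager so the OVS device plugin is loaded   # task name assumed
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted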
Feb 24 15:24:39 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 24 15:24:39 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Feb 24 15:24:39 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Feb 24 15:24:39 compute-0 systemd[1]: Stopping Network Manager...
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8456] caught SIGTERM, shutting down normally.
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8473] dhcp4 (eth0): canceled DHCP transaction
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8473] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8473] dhcp4 (eth0): state changed no lease
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8476] manager: NetworkManager state is now CONNECTED_SITE
Feb 24 15:24:39 compute-0 NetworkManager[7690]: <info>  [1771946679.8577] exiting (success)
Feb 24 15:24:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 15:24:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 15:24:39 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 24 15:24:39 compute-0 systemd[1]: Stopped Network Manager.
Feb 24 15:24:39 compute-0 systemd[1]: NetworkManager.service: Consumed 14.405s CPU time, 4.4M memory peak, read 0B from disk, written 18.0K to disk.
Feb 24 15:24:39 compute-0 systemd[1]: Starting Network Manager...
Feb 24 15:24:39 compute-0 NetworkManager[56995]: <info>  [1771946679.9402] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:fe81dc1b-8858-484f-aedf-ceb94a58f5bc)
Feb 24 15:24:39 compute-0 NetworkManager[56995]: <info>  [1771946679.9405] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 24 15:24:39 compute-0 NetworkManager[56995]: <info>  [1771946679.9449] manager[0x55b04aa91000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 24 15:24:39 compute-0 systemd[1]: Starting Hostname Service...
Feb 24 15:24:40 compute-0 systemd[1]: Started Hostname Service.
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0071] hostname: hostname: using hostnamed
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0071] hostname: static hostname changed from (none) to "compute-0"
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0077] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0082] manager[0x55b04aa91000]: rfkill: Wi-Fi hardware radio set enabled
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0082] manager[0x55b04aa91000]: rfkill: WWAN hardware radio set enabled
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0103] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0112] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0112] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0113] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0114] manager: Networking is enabled by state file
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0116] settings: Loaded settings plugin: keyfile (internal)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0119] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0144] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0152] dhcp: init: Using DHCP client 'internal'
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0154] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0158] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0162] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0167] device (lo): Activation: starting connection 'lo' (813eb82a-ec75-4929-9eba-5e76a5ddb15b)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0171] device (eth0): carrier: link connected
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0174] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0176] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0177] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0181] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0185] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0188] device (eth1): carrier: link connected
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0193] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0197] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (af387bd7-935d-54e8-ab92-78b300c447ac) (indicated)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0197] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0201] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0206] device (eth1): Activation: starting connection 'ci-private-network' (af387bd7-935d-54e8-ab92-78b300c447ac)
Feb 24 15:24:40 compute-0 systemd[1]: Started Network Manager.
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0210] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0217] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0222] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0232] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0237] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0249] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0252] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0254] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0258] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0263] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0266] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0273] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0283] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0294] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0295] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0301] device (lo): Activation: successful, device activated.
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0307] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0310] dhcp4 (eth0): state changed new lease, address=38.102.83.46
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0312] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0316] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0318] device (eth1): Activation: successful, device activated.
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0327] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0426] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 systemd[1]: Starting Network Manager Wait Online...
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0471] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0472] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0474] manager: NetworkManager state is now CONNECTED_SITE
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0476] device (eth0): Activation: successful, device activated.
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0481] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 24 15:24:40 compute-0 NetworkManager[56995]: <info>  [1771946680.0517] manager: startup complete
Feb 24 15:24:40 compute-0 sudo[56978]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:40 compute-0 systemd[1]: Finished Network Manager Wait Online.
Feb 24 15:24:40 compute-0 sudo[57205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogrjauxqyukrnjsuiefahrkloylervv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946680.216329-165-197196734009958/AnsiballZ_dnf.py'
Feb 24 15:24:40 compute-0 sudo[57205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:40 compute-0 python3.9[57208]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:24:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:24:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:24:45 compute-0 systemd[1]: Reloading.
Feb 24 15:24:45 compute-0 systemd-sysv-generator[57266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:24:45 compute-0 systemd-rc-local-generator[57261]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:24:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:24:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:24:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:24:46 compute-0 systemd[1]: run-r53a73255c3114781b67b02b4cd90d94b.service: Deactivated successfully.
Feb 24 15:24:46 compute-0 sudo[57205]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:47 compute-0 sudo[57684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgubqfidnlvapctavjtaxrwbfgpndtzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946686.9868886-177-182202240070154/AnsiballZ_stat.py'
Feb 24 15:24:47 compute-0 sudo[57684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:47 compute-0 python3.9[57687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:24:47 compute-0 sudo[57684]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:47 compute-0 sudo[57837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljpuuvhbvokgqmzezilgnntwztypepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946687.5558884-186-204546706928814/AnsiballZ_ini_file.py'
Feb 24 15:24:47 compute-0 sudo[57837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:48 compute-0 python3.9[57840]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:48 compute-0 sudo[57837]: pam_unix(sudo:session): session closed for user root
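The ini_file task above pins no-auto-default=* in the [main] section so NetworkManager never auto-creates "Wired connection N" profiles for unconfigured NICs (the delete of "Wired connection 1" at 15:24:58 below is the cleanup of one such profile). Reconstructed from the logged parameters (task name assumed):

    - name: Stop NetworkManager auto-creating default wired connections   # task name assumed
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: "*"
        no_extra_spaces: true
        backup: true
        mode: "0644"
      # resulting /etc/NetworkManager/NetworkManager.conf fragment:
      #   [main]
      #   no-auto-default=*

The four state=absent invocations that follow use the same module to strip any dns= and rc-manager= overrides from NetworkManager.conf and from cloud-init's 99-cloud-init.conf, handing DNS and resolv.conf management back to NetworkManager's defaults.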
Feb 24 15:24:48 compute-0 sudo[57992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspntkosrrjbpgyoyquksvgztyaquwyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946688.398902-196-111597699804198/AnsiballZ_ini_file.py'
Feb 24 15:24:48 compute-0 sudo[57992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:48 compute-0 python3.9[57995]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:48 compute-0 sudo[57992]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:49 compute-0 sudo[58145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stllotpnxtircxziggykftvacalzxrfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946688.85843-196-212652028683796/AnsiballZ_ini_file.py'
Feb 24 15:24:49 compute-0 sudo[58145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:49 compute-0 python3.9[58148]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:49 compute-0 sudo[58145]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:49 compute-0 sudo[58298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tegiabqojnnhosxreqzrwicpulrcvusr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946689.4523642-211-131699033281022/AnsiballZ_ini_file.py'
Feb 24 15:24:49 compute-0 sudo[58298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:49 compute-0 python3.9[58301]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:49 compute-0 sudo[58298]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 15:24:50 compute-0 sudo[58451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abftaczbdwlumkecrzbukxokqtzibxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946690.066021-211-135989606389771/AnsiballZ_ini_file.py'
Feb 24 15:24:50 compute-0 sudo[58451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:50 compute-0 python3.9[58454]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:50 compute-0 sudo[58451]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:50 compute-0 sudo[58604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cquluzoiunfetzjnwpmiuptcumuwfzkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946690.666685-226-3533405909484/AnsiballZ_stat.py'
Feb 24 15:24:50 compute-0 sudo[58604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:51 compute-0 python3.9[58607]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:24:51 compute-0 sudo[58604]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:51 compute-0 sudo[58728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkerciotfophcofgruoxtmlmbkmjjzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946690.666685-226-3533405909484/AnsiballZ_copy.py'
Feb 24 15:24:51 compute-0 sudo[58728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:51 compute-0 python3.9[58731]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946690.666685-226-3533405909484/.source _original_basename=.0x5jg26u follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:51 compute-0 sudo[58728]: pam_unix(sudo:session): session closed for user root
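The stat (pid 58607) / copy (pid 58731) pair is Ansible's standard idempotent file push: checksum the destination first, transfer only on mismatch. From the logged parameters (the controller-side src path is not recoverable from the log, so it is a placeholder):

    - name: Install dhclient-enter-hooks   # task name assumed
      ansible.builtin.copy:
        src: dhclient-enter-hooks   # placeholder; real controller-side path not in the log
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: "0755"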
Feb 24 15:24:52 compute-0 sudo[58881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duclobjuvurnfhnmwkbkritppklgzasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946691.9486725-241-111507840842190/AnsiballZ_file.py'
Feb 24 15:24:52 compute-0 sudo[58881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:52 compute-0 python3.9[58884]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:52 compute-0 sudo[58881]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:53 compute-0 sudo[59034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkmjtpquuymqergaqlvtfeiwsalvbstb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946692.5700474-249-129887469691940/AnsiballZ_edpm_os_net_config_mappings.py'
Feb 24 15:24:53 compute-0 sudo[59034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:53 compute-0 python3.9[59037]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 24 15:24:53 compute-0 sudo[59034]: pam_unix(sudo:session): session closed for user root
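edpm_os_net_config_mappings is a custom module from the edpm-ansible collection; with net_config_data_lookup={} it has nothing to apply on this node. When populated, the lookup maps per-node identifiers to nicN aliases so one network template can serve heterogeneous hardware. A hypothetical value (host key and MAC invented for illustration):

    net_config_data_lookup:
      compute-0:                       # hypothetical host entry
        nic1: "fa:16:3e:00:00:01"      # invented MAC; a device name such as eth1 also works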
Feb 24 15:24:53 compute-0 sudo[59187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtasrzxyfleccyiaakelkvvymzdccdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946693.3798335-258-281100566491211/AnsiballZ_file.py'
Feb 24 15:24:53 compute-0 sudo[59187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:53 compute-0 python3.9[59190]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:24:53 compute-0 sudo[59187]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:54 compute-0 sudo[59340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvhknfprgizyhyzxidxktwpkqlyuladx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946694.1030035-268-133343166297632/AnsiballZ_stat.py'
Feb 24 15:24:54 compute-0 sudo[59340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:54 compute-0 sudo[59340]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:54 compute-0 sudo[59464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzctpmwforecnldhsgpvvvztantvbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946694.1030035-268-133343166297632/AnsiballZ_copy.py'
Feb 24 15:24:54 compute-0 sudo[59464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:55 compute-0 sudo[59464]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:55 compute-0 sudo[59617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujmuisssqwtfpifwecucrkelrfxqlng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946695.1824687-283-77421235726086/AnsiballZ_slurp.py'
Feb 24 15:24:55 compute-0 sudo[59617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:55 compute-0 python3.9[59620]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 24 15:24:55 compute-0 sudo[59617]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:56 compute-0 sudo[59793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqtyldvtwtatcobbfdyoimttnilxgmo ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946696.125918-292-150202620100884/async_wrapper.py j750684871477 300 /home/zuul/.ansible/tmp/ansible-tmp-1771946696.125918-292-150202620100884/AnsiballZ_edpm_os_net_config.py _'
Feb 24 15:24:56 compute-0 sudo[59793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:24:56 compute-0 ansible-async_wrapper.py[59796]: Invoked with j750684871477 300 /home/zuul/.ansible/tmp/ansible-tmp-1771946696.125918-292-150202620100884/AnsiballZ_edpm_os_net_config.py _
Feb 24 15:24:56 compute-0 ansible-async_wrapper.py[59799]: Starting module and watcher
Feb 24 15:24:56 compute-0 ansible-async_wrapper.py[59799]: Start watching 59800 (300)
Feb 24 15:24:56 compute-0 ansible-async_wrapper.py[59800]: Start module (59800)
Feb 24 15:24:56 compute-0 ansible-async_wrapper.py[59796]: Return async_wrapper task started.
Feb 24 15:24:56 compute-0 sudo[59793]: pam_unix(sudo:session): session closed for user root
Feb 24 15:24:57 compute-0 python3.9[59801]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
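The wrapper lines above (pids 59796-59800) show Ansible's async path: async_wrapper forks the real module as pid 59800, a watcher enforces the 300-second budget ("Start watching 59800 (300)"), and the controller later polls job id j750684871477 under ANSIBLE_ASYNC_DIR. The Invoked arguments reconstruct to roughly the following task (name and poll interval assumed; the module ships in the edpm-ansible collection, fully-qualified prefix omitted):

    - name: Apply the os-net-config network layout   # task name assumed
      edpm_os_net_config:          # edpm-ansible module; FQCN prefix omitted
        config_file: /etc/os-net-config/config.yaml
        use_nmstate: true
        detailed_exit_codes: true
        cleanup: true
        debug: true
      async: 300   # matches the watcher budget "Start watching 59800 (300)"
      poll: 5      # assumed; the log shows only the async job id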
Feb 24 15:24:57 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 24 15:24:57 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 24 15:24:57 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 24 15:24:57 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 24 15:24:57 compute-0 kernel: cfg80211: failed to load regulatory.db
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.4696] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.4709] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5096] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5098] audit: op="connection-add" uuid="f8450f38-e8f8-47a8-94e2-b11bd40c263f" name="br-ex-br" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5109] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5111] audit: op="connection-add" uuid="faa2627e-c667-451a-b7df-e7bc5b0eb4a1" name="br-ex-port" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5120] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5122] audit: op="connection-add" uuid="d1a5d58e-fd79-4e00-a336-e98c5c0a3808" name="eth1-port" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5131] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5132] audit: op="connection-add" uuid="f0e65860-6d4c-411a-a24d-237fbef76b4e" name="vlan20-port" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5142] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5143] audit: op="connection-add" uuid="2151336b-1c0a-4e02-a942-ed8e2291e3b4" name="vlan21-port" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5152] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5153] audit: op="connection-add" uuid="bb0accb1-277c-4f7d-adba-44854188f6c6" name="vlan22-port" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5167] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5180] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5182] audit: op="connection-add" uuid="76ddb831-d9c3-4a71-a013-067927c053a0" name="br-ex-if" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5234] audit: op="connection-update" uuid="af387bd7-935d-54e8-ab92-78b300c447ac" name="ci-private-network" args="ovs-interface.type,connection.timestamp,connection.port-type,connection.slave-type,connection.controller,connection.master,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ovs-external-ids.data,ipv4.method,ipv4.never-default,ipv4.routing-rules,ipv4.routes,ipv4.addresses,ipv4.dns" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5247] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5249] audit: op="connection-add" uuid="bc105c3f-4ae0-4be8-a23f-6135f626b60b" name="vlan20-if" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5261] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5263] audit: op="connection-add" uuid="9276c433-533f-45ff-9428-6ef477fc4b52" name="vlan21-if" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5280] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5281] audit: op="connection-add" uuid="7da8f9ab-0fd6-453b-a6cf-a60e4afbcc30" name="vlan22-if" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5289] audit: op="connection-delete" uuid="36860a5a-6ecf-323d-93d4-bc747fb83ad9" name="Wired connection 1" pid=59802 uid=0 result="success"
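The checkpoint at 15:24:58 is nmstate's safety net: if the new profiles break connectivity, NetworkManager rolls everything back when the timeout expires, and the checkpoint-adjust-rollback-timeout calls keep extending it while work proceeds. The connection-add triplets (br-ex-br, *-port, *-if) are how NetworkManager models one OVS bridge: a Bridge, a Port per attachment, and an Interface per netdev. A /etc/os-net-config/config.yaml consistent with those device names might look like this (a sketch only; addressing, MTU, and DHCP details are not recoverable from the log):

    network_config:
      - type: interface
        name: eth0
        use_dhcp: true          # matches the dhcp4 lease reapplied on eth0
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false         # assumed
        members:
          - type: interface
            name: eth1
            primary: true       # eth1 is attached as the bridge uplink port
          - type: vlan
            vlan_id: 20         # becomes OVS interface "vlan20"
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22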
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5297] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5298] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5303] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5305] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f8450f38-e8f8-47a8-94e2-b11bd40c263f)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5306] audit: op="connection-activate" uuid="f8450f38-e8f8-47a8-94e2-b11bd40c263f" name="br-ex-br" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5307] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5307] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5311] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5313] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (faa2627e-c667-451a-b7df-e7bc5b0eb4a1)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5314] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5315] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5317] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5320] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d1a5d58e-fd79-4e00-a336-e98c5c0a3808)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5321] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5321] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5324] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5327] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f0e65860-6d4c-411a-a24d-237fbef76b4e)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5328] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5329] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5332] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5334] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2151336b-1c0a-4e02-a942-ed8e2291e3b4)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5336] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5336] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5339] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5342] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0accb1-277c-4f7d-adba-44854188f6c6)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5343] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5344] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5346] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5349] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5350] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5352] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5354] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (76ddb831-d9c3-4a71-a013-067927c053a0)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5355] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5357] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5358] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5359] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5360] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5367] device (eth1): disconnecting for new activation request.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5367] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5369] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5371] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5372] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5375] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5375] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5378] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5382] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (bc105c3f-4ae0-4be8-a23f-6135f626b60b)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5383] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5386] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5387] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5388] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5391] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5392] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5395] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5398] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9276c433-533f-45ff-9428-6ef477fc4b52)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5399] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5401] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5403] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5404] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5406] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <warn>  [1771946698.5407] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5409] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5413] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (7da8f9ab-0fd6-453b-a6cf-a60e4afbcc30)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5414] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5415] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5417] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5417] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5418] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5427] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5428] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5431] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5432] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5437] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5439] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5442] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5444] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5445] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 kernel: ovs-system: entered promiscuous mode
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5459] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5466] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5469] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5473] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5477] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 kernel: Timeout policy base is empty
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5481] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5484] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5486] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 systemd-udevd[59806]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5490] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5494] dhcp4 (eth0): canceled DHCP transaction
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5494] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5494] dhcp4 (eth0): state changed no lease
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5495] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5503] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5506] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59802 uid=0 result="fail" reason="Device is not activated"
Feb 24 15:24:58 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5573] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5578] dhcp4 (eth0): state changed new lease, address=38.102.83.46
Feb 24 15:24:58 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5621] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5633] device (eth1): disconnecting for new activation request.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5634] audit: op="connection-activate" uuid="af387bd7-935d-54e8-ab92-78b300c447ac" name="ci-private-network" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5636] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 24 15:24:58 compute-0 kernel: br-ex: entered promiscuous mode
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5749] device (eth1): Activation: starting connection 'ci-private-network' (af387bd7-935d-54e8-ab92-78b300c447ac)
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5757] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 24 15:24:58 compute-0 kernel: vlan22: entered promiscuous mode
Feb 24 15:24:58 compute-0 systemd-udevd[59808]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5788] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5791] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 kernel: vlan20: entered promiscuous mode
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5804] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5805] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59802 uid=0 result="success"
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5805] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 systemd-udevd[59807]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5806] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5807] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5808] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5809] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5818] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5824] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5828] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5833] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5836] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5839] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5842] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5846] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5850] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5854] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5857] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5862] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5865] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5872] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5880] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5888] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 24 15:24:58 compute-0 kernel: vlan21: entered promiscuous mode
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5891] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5893] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5919] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5923] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5930] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5938] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5948] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5954] device (eth1): Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5961] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5963] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5964] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5965] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5970] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5975] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5980] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5986] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5989] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.5999] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.6010] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.6055] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.6056] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 24 15:24:58 compute-0 NetworkManager[56995]: <info>  [1771946698.6062] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 24 15:24:59 compute-0 NetworkManager[56995]: <info>  [1771946699.7333] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59802 uid=0 result="success"
Feb 24 15:24:59 compute-0 NetworkManager[56995]: <info>  [1771946699.8746] checkpoint[0x55b04aa67950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 24 15:24:59 compute-0 NetworkManager[56995]: <info>  [1771946699.8750] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.0930] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.0939] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.2233] audit: op="networking-control" arg="global-dns-configuration" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.2257] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.2280] audit: op="networking-control" arg="global-dns-configuration" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.2296] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.3555] checkpoint[0x55b04aa67a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 24 15:25:00 compute-0 NetworkManager[56995]: <info>  [1771946700.3560] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59802 uid=0 result="success"
Feb 24 15:25:00 compute-0 ansible-async_wrapper.py[59800]: Module complete (59800)
Feb 24 15:25:00 compute-0 sudo[60139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpejicvnqunfnssbkqigdzwcoygmfnra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946700.3130882-292-109102435647590/AnsiballZ_async_status.py'
Feb 24 15:25:00 compute-0 sudo[60139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:00 compute-0 python3.9[60142]: ansible-ansible.legacy.async_status Invoked with jid=j750684871477.59796 mode=status _async_dir=/root/.ansible_async
Feb 24 15:25:00 compute-0 sudo[60139]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:01 compute-0 sudo[60240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzvozkvkaspesfdrgdyezjbmzybelszw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946700.3130882-292-109102435647590/AnsiballZ_async_status.py'
Feb 24 15:25:01 compute-0 sudo[60240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:01 compute-0 python3.9[60243]: ansible-ansible.legacy.async_status Invoked with jid=j750684871477.59796 mode=cleanup _async_dir=/root/.ansible_async
Feb 24 15:25:01 compute-0 sudo[60240]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:01 compute-0 sudo[60393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehalfxkjttfutkdilchuhgczrqlbjjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946701.6492403-314-255384250898177/AnsiballZ_stat.py'
Feb 24 15:25:01 compute-0 sudo[60393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:01 compute-0 ansible-async_wrapper.py[59799]: Done in kid B.
Feb 24 15:25:01 compute-0 python3.9[60396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:02 compute-0 sudo[60393]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:02 compute-0 sudo[60517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopyuumcvasiowboskoovdeldvjemrrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946701.6492403-314-255384250898177/AnsiballZ_copy.py'
Feb 24 15:25:02 compute-0 sudo[60517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:02 compute-0 python3.9[60520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946701.6492403-314-255384250898177/.source.returncode _original_basename=.dvue3wdm follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:02 compute-0 sudo[60517]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:02 compute-0 sudo[60670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqbpfiijirassocjyxtolgvfnvecwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946702.6344178-330-251335798017399/AnsiballZ_stat.py'
Feb 24 15:25:02 compute-0 sudo[60670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:03 compute-0 python3.9[60673]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:03 compute-0 sudo[60670]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:03 compute-0 sudo[60794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuyruckhnpsrbpcpassdfhmnhxlexbct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946702.6344178-330-251335798017399/AnsiballZ_copy.py'
Feb 24 15:25:03 compute-0 sudo[60794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:03 compute-0 python3.9[60797]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946702.6344178-330-251335798017399/.source.cfg _original_basename=.tw5owwf_ follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:03 compute-0 sudo[60794]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:03 compute-0 sudo[60947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfptapzvpnfqpgtzyojknvfaeoyinqts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946703.648252-345-229794606817109/AnsiballZ_systemd.py'
Feb 24 15:25:03 compute-0 sudo[60947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:04 compute-0 python3.9[60950]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:25:04 compute-0 systemd[1]: Reloading Network Manager...
Feb 24 15:25:04 compute-0 NetworkManager[56995]: <info>  [1771946704.3703] audit: op="reload" arg="0" pid=60955 uid=0 result="success"
Feb 24 15:25:04 compute-0 NetworkManager[56995]: <info>  [1771946704.3713] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 24 15:25:04 compute-0 systemd[1]: Reloaded Network Manager.
Feb 24 15:25:04 compute-0 sudo[60947]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:04 compute-0 sshd-session[52945]: Connection closed by 192.168.122.30 port 35460
Feb 24 15:25:04 compute-0 sshd-session[52942]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:25:04 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Feb 24 15:25:04 compute-0 systemd[1]: session-11.scope: Consumed 41.995s CPU time.
Feb 24 15:25:04 compute-0 systemd-logind[813]: Session 11 logged out. Waiting for processes to exit.
Feb 24 15:25:04 compute-0 systemd-logind[813]: Removed session 11.
Feb 24 15:25:10 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 24 15:25:10 compute-0 sshd-session[60987]: Accepted publickey for zuul from 192.168.122.30 port 55564 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:25:10 compute-0 systemd-logind[813]: New session 12 of user zuul.
Feb 24 15:25:10 compute-0 systemd[1]: Started Session 12 of User zuul.
Feb 24 15:25:10 compute-0 sshd-session[60987]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:25:11 compute-0 python3.9[61141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:25:12 compute-0 python3.9[61295]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:25:13 compute-0 python3.9[61484]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:25:13 compute-0 sshd-session[60990]: Connection closed by 192.168.122.30 port 55564
Feb 24 15:25:13 compute-0 sshd-session[60987]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:25:13 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Feb 24 15:25:13 compute-0 systemd[1]: session-12.scope: Consumed 1.972s CPU time.
Feb 24 15:25:13 compute-0 systemd-logind[813]: Session 12 logged out. Waiting for processes to exit.
Feb 24 15:25:13 compute-0 systemd-logind[813]: Removed session 12.
Feb 24 15:25:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 24 15:25:19 compute-0 sshd-session[61513]: Accepted publickey for zuul from 192.168.122.30 port 40536 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:25:19 compute-0 systemd-logind[813]: New session 13 of user zuul.
Feb 24 15:25:19 compute-0 systemd[1]: Started Session 13 of User zuul.
Feb 24 15:25:19 compute-0 sshd-session[61513]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:25:20 compute-0 python3.9[61667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:25:21 compute-0 python3.9[61821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:25:21 compute-0 sudo[61975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsweafpdnimfjfxcvrscoasldygvlouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946721.6810737-35-73650701316843/AnsiballZ_setup.py'
Feb 24 15:25:21 compute-0 sudo[61975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:22 compute-0 python3.9[61978]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:25:22 compute-0 sudo[61975]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:22 compute-0 sudo[62060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twbpqrgalksccuzohmwxggytleqtnsxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946721.6810737-35-73650701316843/AnsiballZ_dnf.py'
Feb 24 15:25:22 compute-0 sudo[62060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:23 compute-0 python3.9[62063]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:25:24 compute-0 sudo[62060]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:24 compute-0 sudo[62215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drsmpqkndpwdpzsicfomryrnrnpksqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946724.4580448-47-119777864898776/AnsiballZ_setup.py'
Feb 24 15:25:24 compute-0 sudo[62215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:25 compute-0 python3.9[62218]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:25:25 compute-0 sudo[62215]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:25 compute-0 sudo[62407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyovhobqkojmbuqxjdzbdsbdyuwwngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946725.4547458-58-162003171547048/AnsiballZ_file.py'
Feb 24 15:25:25 compute-0 sudo[62407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:26 compute-0 python3.9[62410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:26 compute-0 sudo[62407]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:26 compute-0 sudo[62561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuotebvkmirkwrogamrrwvzpixpzalul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946726.2632525-66-278368202156171/AnsiballZ_command.py'
Feb 24 15:25:26 compute-0 sudo[62561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:26 compute-0 python3.9[62564]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:25:26 compute-0 sudo[62561]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:27 compute-0 sudo[62725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkromxkporckhwketfiuzwmqcpwkhfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946727.1428716-74-162953517102059/AnsiballZ_stat.py'
Feb 24 15:25:27 compute-0 sudo[62725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:27 compute-0 python3.9[62728]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:27 compute-0 sudo[62725]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:27 compute-0 sudo[62804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-begwfnmcwqphbfogvhmvbddillqofwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946727.1428716-74-162953517102059/AnsiballZ_file.py'
Feb 24 15:25:27 compute-0 sudo[62804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:28 compute-0 python3.9[62807]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:28 compute-0 sudo[62804]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:28 compute-0 sudo[62957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gluqxiisovimyhbibzjhdryhsyxxwizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946728.3221653-86-234413329098278/AnsiballZ_stat.py'
Feb 24 15:25:28 compute-0 sudo[62957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:28 compute-0 python3.9[62960]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:28 compute-0 sudo[62957]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:28 compute-0 sudo[63036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdsartuebqhxxfhtrjbzaphxmvykvjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946728.3221653-86-234413329098278/AnsiballZ_file.py'
Feb 24 15:25:28 compute-0 sudo[63036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:29 compute-0 python3.9[63039]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:25:29 compute-0 sudo[63036]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:29 compute-0 sudo[63190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjcvnevatxqtheafghbniqmjvzhsqym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946729.3512223-99-199173421195591/AnsiballZ_ini_file.py'
Feb 24 15:25:29 compute-0 sudo[63190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:29 compute-0 python3.9[63193]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:25:30 compute-0 sudo[63190]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:30 compute-0 sudo[63343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkswulybrzskfdxvshypvjrhuqpfmedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946730.1120045-99-79128923202353/AnsiballZ_ini_file.py'
Feb 24 15:25:30 compute-0 sudo[63343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:30 compute-0 python3.9[63346]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:25:30 compute-0 sudo[63343]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:30 compute-0 sudo[63496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkigcpstmbbdvnjyjhljsklbploffyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946730.6656811-99-5209221661634/AnsiballZ_ini_file.py'
Feb 24 15:25:30 compute-0 sudo[63496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:31 compute-0 python3.9[63499]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:25:31 compute-0 sudo[63496]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:31 compute-0 sudo[63649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjfuxngfcbgrzrbtxsjzyakqfzjukedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946731.2130055-99-230930343694616/AnsiballZ_ini_file.py'
Feb 24 15:25:31 compute-0 sudo[63649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:31 compute-0 python3.9[63652]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:25:31 compute-0 sudo[63649]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:32 compute-0 sudo[63802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgnwulmvmoizvdtavveocucdauxgieu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946731.8835788-130-267655996437969/AnsiballZ_dnf.py'
Feb 24 15:25:32 compute-0 sudo[63802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:32 compute-0 python3.9[63805]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:25:33 compute-0 sudo[63802]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:33 compute-0 sudo[63956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvudzelqqcmvbhgkxiqggdihfzmodfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946733.747706-141-52852284009773/AnsiballZ_setup.py'
Feb 24 15:25:33 compute-0 sudo[63956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:34 compute-0 python3.9[63959]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:25:34 compute-0 sudo[63956]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:34 compute-0 sudo[64111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cebvckwqkorojrccmwqtotxhapgbocpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946734.503383-149-19750875660492/AnsiballZ_stat.py'
Feb 24 15:25:34 compute-0 sudo[64111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:34 compute-0 python3.9[64114]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:25:34 compute-0 sudo[64111]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:35 compute-0 sudo[64264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhzunabxcurdvisavfftipwublpasnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946735.130761-158-50637439666139/AnsiballZ_stat.py'
Feb 24 15:25:35 compute-0 sudo[64264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:35 compute-0 python3.9[64267]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:25:35 compute-0 sudo[64264]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:36 compute-0 sudo[64417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pflomuvimowndbhfbqilayqxaexmmryx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946735.8180933-168-144750738428294/AnsiballZ_command.py'
Feb 24 15:25:36 compute-0 sudo[64417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:36 compute-0 python3.9[64420]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:25:36 compute-0 sudo[64417]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:36 compute-0 sudo[64571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgntlyeshxoyhrvjjbmoiknlutcozbcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946736.496411-178-135103327604482/AnsiballZ_service_facts.py'
Feb 24 15:25:36 compute-0 sudo[64571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:37 compute-0 python3.9[64574]: ansible-service_facts Invoked
Feb 24 15:25:37 compute-0 network[64591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:25:37 compute-0 network[64592]: 'network-scripts' will be removed from distribution in near future.
Feb 24 15:25:37 compute-0 network[64593]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 24 15:25:39 compute-0 sudo[64571]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:40 compute-0 sudo[64877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvovwgnrdswtlgnxqdbepzdqotszphpw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1771946740.0767558-193-121662125989157/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1771946740.0767558-193-121662125989157/args'
Feb 24 15:25:40 compute-0 sudo[64877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:40 compute-0 sudo[64877]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:41 compute-0 sudo[65045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydjzgxkhxnkwlsqebrcqweswuibjesq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946740.7698977-204-182589889905526/AnsiballZ_dnf.py'
Feb 24 15:25:41 compute-0 sudo[65045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:41 compute-0 python3.9[65048]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:25:42 compute-0 sudo[65045]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:43 compute-0 sudo[65199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrnyriiysogwnzwoyljirpjsvapapqpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946742.7808363-217-108713069985876/AnsiballZ_package_facts.py'
Feb 24 15:25:43 compute-0 sudo[65199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:43 compute-0 python3.9[65202]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 24 15:25:43 compute-0 sudo[65199]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:44 compute-0 sudo[65352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjkjuvlodtvialddzfmkpbeyinsrpod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946744.2556546-227-50964556186348/AnsiballZ_stat.py'
Feb 24 15:25:44 compute-0 sudo[65352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:44 compute-0 python3.9[65355]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:44 compute-0 sudo[65352]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:45 compute-0 sudo[65478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwavcdavgmwyhtokjsonfrscogcqypm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946744.2556546-227-50964556186348/AnsiballZ_copy.py'
Feb 24 15:25:45 compute-0 sudo[65478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:45 compute-0 python3.9[65481]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946744.2556546-227-50964556186348/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:45 compute-0 sudo[65478]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:45 compute-0 sudo[65633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwoaegijmzuwakyhvvwcawynxsojhxxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946745.5767872-242-235771678709371/AnsiballZ_stat.py'
Feb 24 15:25:45 compute-0 sudo[65633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:46 compute-0 python3.9[65636]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:25:46 compute-0 sudo[65633]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:46 compute-0 sudo[65759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqcgwwwikzkqkpnyczdvvpskzpnfreg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946745.5767872-242-235771678709371/AnsiballZ_copy.py'
Feb 24 15:25:46 compute-0 sudo[65759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:46 compute-0 python3.9[65762]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946745.5767872-242-235771678709371/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:46 compute-0 sudo[65759]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:47 compute-0 sudo[65914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrrfkwibvngtzpvqbzuigctmdvyxikr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946746.9301639-263-71029723954082/AnsiballZ_lineinfile.py'
Feb 24 15:25:47 compute-0 sudo[65914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:47 compute-0 python3.9[65917]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:25:47 compute-0 sudo[65914]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:48 compute-0 sudo[66069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhoozpemsqhoxaavktcsfxhfemfujrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946748.0844314-278-270211539866974/AnsiballZ_setup.py'
Feb 24 15:25:48 compute-0 sudo[66069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:48 compute-0 python3.9[66072]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:25:48 compute-0 sudo[66069]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:49 compute-0 sudo[66154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosulhyhdaathfeufhpanyvnavhtlegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946748.0844314-278-270211539866974/AnsiballZ_systemd.py'
Feb 24 15:25:49 compute-0 sudo[66154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:49 compute-0 python3.9[66157]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:25:49 compute-0 sudo[66154]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:50 compute-0 sudo[66309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqvkhpgsvphnmtfvsdugtxwtvuivetjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946750.2298963-294-249814443851071/AnsiballZ_setup.py'
Feb 24 15:25:50 compute-0 sudo[66309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:50 compute-0 python3.9[66312]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:25:50 compute-0 sudo[66309]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:51 compute-0 sudo[66394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbjgtuengvgbncuijvlomgkdmtankgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946750.2298963-294-249814443851071/AnsiballZ_systemd.py'
Feb 24 15:25:51 compute-0 sudo[66394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:25:51 compute-0 python3.9[66397]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:25:51 compute-0 chronyd[810]: chronyd exiting
Feb 24 15:25:51 compute-0 systemd[1]: Stopping NTP client/server...
Feb 24 15:25:51 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Feb 24 15:25:51 compute-0 systemd[1]: Stopped NTP client/server.
Feb 24 15:25:51 compute-0 systemd[1]: Starting NTP client/server...
Feb 24 15:25:51 compute-0 chronyd[66406]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 24 15:25:51 compute-0 chronyd[66406]: Frequency -26.391 +/- 0.172 ppm read from /var/lib/chrony/drift
Feb 24 15:25:51 compute-0 chronyd[66406]: Loaded seccomp filter (level 2)
Feb 24 15:25:51 compute-0 systemd[1]: Started NTP client/server.
Feb 24 15:25:51 compute-0 sudo[66394]: pam_unix(sudo:session): session closed for user root
Feb 24 15:25:51 compute-0 sshd-session[61516]: Connection closed by 192.168.122.30 port 40536
Feb 24 15:25:51 compute-0 sshd-session[61513]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:25:51 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Feb 24 15:25:51 compute-0 systemd[1]: session-13.scope: Consumed 21.906s CPU time.
Feb 24 15:25:51 compute-0 systemd-logind[813]: Session 13 logged out. Waiting for processes to exit.
Feb 24 15:25:51 compute-0 systemd-logind[813]: Removed session 13.
Feb 24 15:25:58 compute-0 sshd-session[66432]: Accepted publickey for zuul from 192.168.122.30 port 49244 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:25:58 compute-0 systemd-logind[813]: New session 14 of user zuul.
Feb 24 15:25:58 compute-0 systemd[1]: Started Session 14 of User zuul.
Feb 24 15:25:58 compute-0 sshd-session[66432]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:25:59 compute-0 python3.9[66585]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:25:59 compute-0 sudo[66739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywevvpgzejnalyisjgjaublrdpuaipsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946759.5856438-28-248921550996596/AnsiballZ_file.py'
Feb 24 15:25:59 compute-0 sudo[66739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:00 compute-0 python3.9[66742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:00 compute-0 sudo[66739]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:00 compute-0 sudo[66915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmhqkqjhyhjbfmhelljvmqhiqezvoahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946760.3062725-36-250231105697549/AnsiballZ_stat.py'
Feb 24 15:26:00 compute-0 sudo[66915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:00 compute-0 python3.9[66918]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:00 compute-0 sudo[66915]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:01 compute-0 sudo[66994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rddzbnfiimwvvkdmibkvjtzsttqdukfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946760.3062725-36-250231105697549/AnsiballZ_file.py'
Feb 24 15:26:01 compute-0 sudo[66994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:01 compute-0 python3.9[66997]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.b0cojd2r recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:01 compute-0 sudo[66994]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:01 compute-0 sudo[67147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fikvrenkiivadvvojldyumlwglqqfpyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946761.6132474-56-27143461869308/AnsiballZ_stat.py'
Feb 24 15:26:01 compute-0 sudo[67147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:02 compute-0 python3.9[67150]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:02 compute-0 sudo[67147]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:02 compute-0 sudo[67271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fszwlpikpagqxjgbtnewapupuixyxwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946761.6132474-56-27143461869308/AnsiballZ_copy.py'
Feb 24 15:26:02 compute-0 sudo[67271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:02 compute-0 python3.9[67274]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946761.6132474-56-27143461869308/.source _original_basename=.6w6hl5x9 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:02 compute-0 sudo[67271]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:03 compute-0 sudo[67424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anupbeutuiwihuwxgtfcvdbgeafqihnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946762.8400204-72-104864230542195/AnsiballZ_file.py'
Feb 24 15:26:03 compute-0 sudo[67424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:03 compute-0 python3.9[67427]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:26:03 compute-0 sudo[67424]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:03 compute-0 sudo[67579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrpkdoofifwmedowjnlltendmjwzfjmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946763.4411545-80-221326735498203/AnsiballZ_stat.py'
Feb 24 15:26:03 compute-0 sudo[67579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:03 compute-0 python3.9[67582]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:03 compute-0 sudo[67579]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:04 compute-0 sudo[67703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitokwnmdsyzmppvgcnzsbtdyedarcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946763.4411545-80-221326735498203/AnsiballZ_copy.py'
Feb 24 15:26:04 compute-0 sudo[67703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:04 compute-0 python3.9[67706]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946763.4411545-80-221326735498203/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:26:04 compute-0 sudo[67703]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:04 compute-0 sudo[67856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvithekpnsqyagekrvjcryqakpcrpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946764.406185-80-236910759147659/AnsiballZ_stat.py'
Feb 24 15:26:04 compute-0 sudo[67856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:04 compute-0 python3.9[67859]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:04 compute-0 sudo[67856]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:05 compute-0 sudo[67980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutfsguyyyuirrnjksqympvniqfxvrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946764.406185-80-236910759147659/AnsiballZ_copy.py'
Feb 24 15:26:05 compute-0 sudo[67980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:05 compute-0 sshd-session[67452]: Invalid user ubuntu from 120.48.56.86 port 53942
Feb 24 15:26:05 compute-0 python3.9[67983]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946764.406185-80-236910759147659/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:26:05 compute-0 sudo[67980]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:05 compute-0 sshd-session[67452]: Received disconnect from 120.48.56.86 port 53942:11:  [preauth]
Feb 24 15:26:05 compute-0 sshd-session[67452]: Disconnected from invalid user ubuntu 120.48.56.86 port 53942 [preauth]
Feb 24 15:26:05 compute-0 sudo[68133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykafrhpxmsrmkjygmmbcigcxldrcsobi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946765.3947275-109-237061499733017/AnsiballZ_file.py'
Feb 24 15:26:05 compute-0 sudo[68133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:05 compute-0 python3.9[68136]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:05 compute-0 sudo[68133]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:06 compute-0 sudo[68286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewujngdcsgrbzbsqdqgvtlbktaozymty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946765.8862884-117-19453357138745/AnsiballZ_stat.py'
Feb 24 15:26:06 compute-0 sudo[68286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:06 compute-0 python3.9[68289]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:06 compute-0 sudo[68286]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:06 compute-0 sudo[68410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzvftmhrdeaphcpyhtsbtrorafklqfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946765.8862884-117-19453357138745/AnsiballZ_copy.py'
Feb 24 15:26:06 compute-0 sudo[68410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:06 compute-0 python3.9[68413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946765.8862884-117-19453357138745/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:06 compute-0 sudo[68410]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:07 compute-0 sudo[68563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luykcdpowggkknlogyulvwcfdcdxvivn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946767.0689483-132-251868331652263/AnsiballZ_stat.py'
Feb 24 15:26:07 compute-0 sudo[68563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:07 compute-0 python3.9[68566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:07 compute-0 sudo[68563]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:07 compute-0 sudo[68687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqzfrgrxextdbineqdkgrcgmytcawvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946767.0689483-132-251868331652263/AnsiballZ_copy.py'
Feb 24 15:26:07 compute-0 sudo[68687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:08 compute-0 python3.9[68690]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946767.0689483-132-251868331652263/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:08 compute-0 sudo[68687]: pam_unix(sudo:session): session closed for user root
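
The two copies above install the systemd unit and a preset that marks it enabled; shipping a 91-*.preset lets systemd's preset machinery decide the unit's state instead of hardcoding it. Sketch under the same assumptions as before:

    # Sketch: names hypothetical; parameters as logged
    - name: Install edpm-container-shutdown unit file
      ansible.builtin.copy:
        src: edpm-container-shutdown.service
        dest: /etc/systemd/system/edpm-container-shutdown.service
        owner: root
        group: root
        mode: "0644"

    - name: Install preset enabling the unit
      ansible.builtin.copy:
        src: 91-edpm-container-shutdown.preset
        dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
        owner: root
        group: root
        mode: "0644"
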
Feb 24 15:26:08 compute-0 sudo[68840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivholbpbmmdrisvdbalnwjcvlykwdiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946768.2036455-147-173581758821398/AnsiballZ_systemd.py'
Feb 24 15:26:08 compute-0 sudo[68840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:09 compute-0 python3.9[68843]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:26:09 compute-0 systemd[1]: Reloading.
Feb 24 15:26:09 compute-0 systemd-rc-local-generator[68865]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:09 compute-0 systemd-sysv-generator[68873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:09 compute-0 systemd[1]: Reloading.
Feb 24 15:26:09 compute-0 systemd-sysv-generator[68914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:09 compute-0 systemd-rc-local-generator[68908]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:09 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Feb 24 15:26:09 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Feb 24 15:26:09 compute-0 sudo[68840]: pam_unix(sudo:session): session closed for user root
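
The builtin.systemd call above (daemon_reload=True, enabled=True, state=started) likely accounts for the two consecutive "Reloading." lines: one reload for the new unit file, and another as part of the enable step. The netns-placeholder unit below repeats the same copy/preset/enable sequence. As a task:

    # Sketch: name hypothetical; parameters as logged
    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        enabled: true
        state: started
        daemon_reload: true    # pick up the freshly copied unit file
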
Feb 24 15:26:09 compute-0 sudo[69083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjeowzjjeuiblepnkewqailwiuysjjim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946769.5521662-155-81978927109345/AnsiballZ_stat.py'
Feb 24 15:26:09 compute-0 sudo[69083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:09 compute-0 python3.9[69086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:09 compute-0 sudo[69083]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:10 compute-0 sudo[69207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygsogqgnbrnzunzaqrmzafsomxnuofxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946769.5521662-155-81978927109345/AnsiballZ_copy.py'
Feb 24 15:26:10 compute-0 sudo[69207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:10 compute-0 python3.9[69210]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946769.5521662-155-81978927109345/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:10 compute-0 sudo[69207]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:10 compute-0 sudo[69360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsefmtiyqqnonhufyymtvxbndfyelhzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946770.5121562-170-71346721276467/AnsiballZ_stat.py'
Feb 24 15:26:10 compute-0 sudo[69360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:10 compute-0 python3.9[69363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:10 compute-0 sudo[69360]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:11 compute-0 sudo[69484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjellocftwnjjdmnawunsfunkgllecml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946770.5121562-170-71346721276467/AnsiballZ_copy.py'
Feb 24 15:26:11 compute-0 sudo[69484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:11 compute-0 python3.9[69487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946770.5121562-170-71346721276467/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:11 compute-0 sudo[69484]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:11 compute-0 sudo[69637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdghwzqkzyiifuxttdhoyhdtlwzbycy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946771.6375635-185-230066149239212/AnsiballZ_systemd.py'
Feb 24 15:26:11 compute-0 sudo[69637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:12 compute-0 python3.9[69640]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:26:12 compute-0 systemd[1]: Reloading.
Feb 24 15:26:12 compute-0 systemd-rc-local-generator[69672]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:12 compute-0 systemd-sysv-generator[69675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:12 compute-0 systemd[1]: Reloading.
Feb 24 15:26:12 compute-0 systemd-rc-local-generator[69715]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:12 compute-0 systemd-sysv-generator[69718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:12 compute-0 systemd[1]: Starting Create netns directory...
Feb 24 15:26:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 24 15:26:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 24 15:26:12 compute-0 systemd[1]: Finished Create netns directory.
Feb 24 15:26:12 compute-0 sudo[69637]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:13 compute-0 python3.9[69883]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:26:13 compute-0 network[69900]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:26:13 compute-0 network[69901]: 'network-scripts' will be removed from distribution in near future.
Feb 24 15:26:13 compute-0 network[69902]: It is advised to switch to 'NetworkManager' instead for network management.
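
The service_facts run at 15:26:13 is apparently what wakes the legacy network init script (service_facts also shells out to the SysV "service" tooling besides systemctl), producing the network-scripts deprecation banner above. The results land under ansible_facts.services and are typically used to guard later service tasks; the debug task here is purely illustrative:

    - name: Collect service state
      ansible.builtin.service_facts:

    # Illustrative only: inspect one entry of the gathered facts
    - name: Show iptables service state
      ansible.builtin.debug:
        var: ansible_facts.services['iptables.service']
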
Feb 24 15:26:15 compute-0 sudo[70163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djegrlzlyvwximkqwbdufoknzchqkhys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946775.2491279-201-208749552699496/AnsiballZ_systemd.py'
Feb 24 15:26:15 compute-0 sudo[70163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:15 compute-0 python3.9[70166]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:26:15 compute-0 systemd[1]: Reloading.
Feb 24 15:26:15 compute-0 systemd-sysv-generator[70201]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:15 compute-0 systemd-rc-local-generator[70198]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:15 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 24 15:26:16 compute-0 iptables.init[70214]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 24 15:26:16 compute-0 iptables.init[70214]: iptables: Flushing firewall rules: [  OK  ]
Feb 24 15:26:16 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Feb 24 15:26:16 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 24 15:26:16 compute-0 sudo[70163]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:16 compute-0 sudo[70408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrlcnvzdjkreoqqwrhhkxmofjnqmeuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946776.3203704-201-107681828498736/AnsiballZ_systemd.py'
Feb 24 15:26:16 compute-0 sudo[70408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:16 compute-0 python3.9[70411]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:26:16 compute-0 sudo[70408]: pam_unix(sudo:session): session closed for user root
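
iptables.service is stopped and disabled above, with the init script resetting chain policies to ACCEPT and flushing rules on the way down; ip6tables.service gets the same treatment (no systemd output appears for it, suggesting it was already inactive). The log shows two separate module calls; the loop below is a compaction, not the original form:

    # Sketch: consolidating the two logged invocations into one looped task
    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service
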
Feb 24 15:26:17 compute-0 sudo[70563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwmxbffakzktbixshhwyvjifylmrsxbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946777.1110537-217-37293010198191/AnsiballZ_systemd.py'
Feb 24 15:26:17 compute-0 sudo[70563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:17 compute-0 python3.9[70566]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:26:17 compute-0 systemd[1]: Reloading.
Feb 24 15:26:17 compute-0 systemd-rc-local-generator[70592]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:26:17 compute-0 systemd-sysv-generator[70596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:26:17 compute-0 systemd[1]: Starting Netfilter Tables...
Feb 24 15:26:17 compute-0 systemd[1]: Finished Netfilter Tables.
Feb 24 15:26:17 compute-0 sudo[70563]: pam_unix(sudo:session): session closed for user root
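
With the legacy services out of the way, nftables is enabled and started. Reconstructed from the logged arguments:

    - name: Enable and start nftables    # name hypothetical
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true
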
Feb 24 15:26:18 compute-0 sudo[70763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfzvfpozvvjyjhwznaieqknwlgkqvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946778.1522272-225-177249947250116/AnsiballZ_command.py'
Feb 24 15:26:18 compute-0 sudo[70763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:18 compute-0 python3.9[70766]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:26:18 compute-0 sudo[70763]: pam_unix(sudo:session): session closed for user root
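
Before any generated rule files are loaded, the live ruleset is wiped so the node starts from a known-empty state. The logged command maps to:

    - name: Flush the current nftables ruleset    # name hypothetical
      ansible.builtin.command:
        cmd: nft flush ruleset
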
Feb 24 15:26:19 compute-0 sudo[70917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljnlkcyvtamkcxolqcfjqlicfqletwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946779.427978-239-53116453458095/AnsiballZ_stat.py'
Feb 24 15:26:19 compute-0 sudo[70917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:19 compute-0 python3.9[70920]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:19 compute-0 sudo[70917]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:20 compute-0 sudo[71043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxeodkshfbdaklssexjdpniudmxshfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946779.427978-239-53116453458095/AnsiballZ_copy.py'
Feb 24 15:26:20 compute-0 sudo[71043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:20 compute-0 python3.9[71046]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946779.427978-239-53116453458095/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:20 compute-0 sudo[71043]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:20 compute-0 sudo[71197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyregcziwtpupgkagqpsfjmsewtrgjon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946780.5052218-254-198799984864246/AnsiballZ_systemd.py'
Feb 24 15:26:20 compute-0 sudo[71197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:21 compute-0 python3.9[71200]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:26:21 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Feb 24 15:26:21 compute-0 sshd[1019]: Received SIGHUP; restarting.
Feb 24 15:26:21 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Feb 24 15:26:21 compute-0 sshd[1019]: Server listening on 0.0.0.0 port 22.
Feb 24 15:26:21 compute-0 sshd[1019]: Server listening on :: port 22.
Feb 24 15:26:21 compute-0 sudo[71197]: pam_unix(sudo:session): session closed for user root
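
The sshd_config deployed above is checked with validate=/usr/sbin/sshd -T -f %s before being moved into place, so a broken config can never land; the follow-up systemd call then reloads rather than restarts sshd, which matches the SIGHUP in the sshd log (existing connections survive). The _original_basename of sshd_config_block.j2 suggests a template task; in a real role the reload would more likely be a handler:

    # Sketch: template vs. copy and the task names are assumptions
    - name: Deploy sshd_config
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s   # reject unparsable configs

    - name: Reload sshd
      ansible.builtin.systemd:
        name: sshd
        state: reloaded
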
Feb 24 15:26:21 compute-0 sudo[71354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-finoxqvkgibvtyxzcwozesicsznmddmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946781.227141-262-198770874297177/AnsiballZ_file.py'
Feb 24 15:26:21 compute-0 sudo[71354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:21 compute-0 python3.9[71357]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:21 compute-0 sudo[71354]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:21 compute-0 sudo[71507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnzowzujxpjgfwwuzayqyhxoyyrjmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946781.7752125-270-180834281442443/AnsiballZ_stat.py'
Feb 24 15:26:21 compute-0 sudo[71507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:22 compute-0 python3.9[71510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:22 compute-0 sudo[71507]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:22 compute-0 sudo[71631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdodwkltqlivaigzpskrbsbebrjentnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946781.7752125-270-180834281442443/AnsiballZ_copy.py'
Feb 24 15:26:22 compute-0 sudo[71631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:22 compute-0 python3.9[71634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946781.7752125-270-180834281442443/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:22 compute-0 sudo[71631]: pam_unix(sudo:session): session closed for user root
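
Here a rules staging area is created and the first YAML fragment (sshd-networks.yaml, rendered from firewall.yaml.j2) is dropped into it; edpm-nftables-base.yaml and edpm-nftables-user-rules.yaml follow the same pattern below, and all three are consumed later by edpm_nftables_from_files. Sketch:

    # Sketch: names hypothetical; paths and modes as logged
    - name: Create the firewall snippet directory
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: "0750"

    - name: Drop the sshd firewall definition
      ansible.builtin.template:
        src: firewall.yaml.j2
        dest: /var/lib/edpm-config/firewall/sshd-networks.yaml
        owner: root
        group: root
        mode: "0644"
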
Feb 24 15:26:23 compute-0 sudo[71784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkueyrphhnkjnhnxoskacnmsrpcalck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946782.873874-288-174404037020248/AnsiballZ_timezone.py'
Feb 24 15:26:23 compute-0 sudo[71784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:23 compute-0 python3.9[71787]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 24 15:26:23 compute-0 systemd[1]: Starting Time & Date Service...
Feb 24 15:26:23 compute-0 systemd[1]: Started Time & Date Service.
Feb 24 15:26:23 compute-0 sudo[71784]: pam_unix(sudo:session): session closed for user root
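
The timezone module call above sets the system to UTC; its first use is what pulls in systemd-timedated (started here, idle-exiting at 15:26:53 further down). As a task:

    - name: Set the system timezone to UTC    # name hypothetical
      community.general.timezone:
        name: UTC
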
Feb 24 15:26:23 compute-0 sudo[71941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsthoyrphgasnspvkuwzenavunuaieme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946783.7579472-297-54968797345610/AnsiballZ_file.py'
Feb 24 15:26:23 compute-0 sudo[71941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:24 compute-0 python3.9[71944]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:24 compute-0 sudo[71941]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:24 compute-0 sudo[72094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrppwxtlnuxbaqoorhkgbbtylidrxpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946784.4958565-305-211901424756611/AnsiballZ_stat.py'
Feb 24 15:26:24 compute-0 sudo[72094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:24 compute-0 python3.9[72097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:24 compute-0 sudo[72094]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:25 compute-0 sudo[72218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asssftjeyiioivedvhvdfqxygniewivb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946784.4958565-305-211901424756611/AnsiballZ_copy.py'
Feb 24 15:26:25 compute-0 sudo[72218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:25 compute-0 python3.9[72221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946784.4958565-305-211901424756611/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:25 compute-0 sudo[72218]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:25 compute-0 sudo[72371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqsjvwilmrmjunqesylbpimgygclyhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946785.6154537-320-28607020142879/AnsiballZ_stat.py'
Feb 24 15:26:25 compute-0 sudo[72371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:26 compute-0 python3.9[72374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:26 compute-0 sudo[72371]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:26 compute-0 sudo[72495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmxunnimhffatryrytukroibayoaaztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946785.6154537-320-28607020142879/AnsiballZ_copy.py'
Feb 24 15:26:26 compute-0 sudo[72495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:26 compute-0 python3.9[72498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946785.6154537-320-28607020142879/.source.yaml _original_basename=.f34qq4ew follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:26 compute-0 sudo[72495]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:26 compute-0 sudo[72648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphmxyaxfsvvbxrpzzyccgxtjgojtiks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946786.7366383-335-122300455777890/AnsiballZ_stat.py'
Feb 24 15:26:26 compute-0 sudo[72648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:27 compute-0 python3.9[72651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:27 compute-0 sudo[72648]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:27 compute-0 sudo[72772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxhuthpmetktfjmpetbohiypxtppixac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946786.7366383-335-122300455777890/AnsiballZ_copy.py'
Feb 24 15:26:27 compute-0 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:27 compute-0 python3.9[72775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946786.7366383-335-122300455777890/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:27 compute-0 sudo[72772]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:27 compute-0 sudo[72925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fblvhyoowzonleiiinkndjzfzoyqxeor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946787.7858212-350-41197360060419/AnsiballZ_command.py'
Feb 24 15:26:27 compute-0 sudo[72925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:28 compute-0 python3.9[72928]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:26:28 compute-0 sudo[72925]: pam_unix(sudo:session): session closed for user root
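
Above, /etc/nftables/iptables.nft is installed (root-only, mode 0600 like all the generated rule files) and loaded with nft -f, presumably seeding the base tables that the edpm-*.nft files extend; the nft -j list ruleset that follows dumps the result as JSON for the playbook to inspect. Reconstruction:

    # Sketch: names hypothetical; parameters as logged
    - name: Install the iptables-compat ruleset
      ansible.builtin.copy:
        src: iptables.nft
        dest: /etc/nftables/iptables.nft
        owner: root
        group: root
        mode: "0600"

    - name: Load the iptables-compat ruleset
      ansible.builtin.command:
        cmd: nft -f /etc/nftables/iptables.nft
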
Feb 24 15:26:28 compute-0 sudo[73079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmtgnkhgldywjjlhognjlapzrzrwggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946788.3285856-358-102943426849544/AnsiballZ_command.py'
Feb 24 15:26:28 compute-0 sudo[73079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:28 compute-0 python3.9[73082]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:26:28 compute-0 sudo[73079]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:29 compute-0 sudo[73233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmunnirpkxyudodwbepxyvhrpiqdkwls ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771946788.8974128-366-4904174480630/AnsiballZ_edpm_nftables_from_files.py'
Feb 24 15:26:29 compute-0 sudo[73233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:29 compute-0 python3[73236]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 24 15:26:29 compute-0 sudo[73233]: pam_unix(sudo:session): session closed for user root
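
ansible-edpm_nftables_from_files is a custom module (from the edpm-ansible collection) that reads the YAML fragments staged under /var/lib/edpm-config/firewall and renders them into the edpm-*.nft files written by the tasks that follow. Only the module name and its src parameter are visible in the log; the collection namespace below is an assumption:

    - name: Render nftables files from the staged snippets
      osp.edpm.edpm_nftables_from_files:    # namespace assumed
        src: /var/lib/edpm-config/firewall
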
Feb 24 15:26:29 compute-0 sudo[73386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyzukgunidsrklcxdytehdswomppukxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946789.5818582-374-193787737081041/AnsiballZ_stat.py'
Feb 24 15:26:29 compute-0 sudo[73386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:30 compute-0 python3.9[73389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:30 compute-0 sudo[73386]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:30 compute-0 sudo[73510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkvbtgkttsdkhqafhphxwanguvzyexwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946789.5818582-374-193787737081041/AnsiballZ_copy.py'
Feb 24 15:26:30 compute-0 sudo[73510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:30 compute-0 python3.9[73513]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946789.5818582-374-193787737081041/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:30 compute-0 sudo[73510]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:30 compute-0 sudo[73663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghrnujnbnhrnvhjlblrksunyifqmgci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946790.6288128-389-249022593535837/AnsiballZ_stat.py'
Feb 24 15:26:30 compute-0 sudo[73663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:31 compute-0 python3.9[73666]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:31 compute-0 sudo[73663]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:31 compute-0 sudo[73787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwvigrbmooyufodndsogxwpwhanwjpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946790.6288128-389-249022593535837/AnsiballZ_copy.py'
Feb 24 15:26:31 compute-0 sudo[73787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:31 compute-0 python3.9[73790]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946790.6288128-389-249022593535837/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:31 compute-0 sudo[73787]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:31 compute-0 sudo[73940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbnaaoldjudpeghrbnuinrukuofeqfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946791.7160141-404-256549062221567/AnsiballZ_stat.py'
Feb 24 15:26:31 compute-0 sudo[73940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:32 compute-0 python3.9[73943]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:32 compute-0 sudo[73940]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:32 compute-0 sudo[74064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennntubgsfqashpxnnuzliwnfqvsfewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946791.7160141-404-256549062221567/AnsiballZ_copy.py'
Feb 24 15:26:32 compute-0 sudo[74064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:32 compute-0 python3.9[74067]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946791.7160141-404-256549062221567/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:32 compute-0 sudo[74064]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:33 compute-0 sudo[74217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwydfyxjznefmbhvlvxfdfmockuvnwbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946793.0764859-419-88019496389990/AnsiballZ_stat.py'
Feb 24 15:26:33 compute-0 sudo[74217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:33 compute-0 python3.9[74220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:33 compute-0 sudo[74217]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:33 compute-0 sudo[74341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxndugaakhjfibtixfgwzyuqogdbcvex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946793.0764859-419-88019496389990/AnsiballZ_copy.py'
Feb 24 15:26:33 compute-0 sudo[74341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:34 compute-0 python3.9[74344]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946793.0764859-419-88019496389990/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:34 compute-0 sudo[74341]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:34 compute-0 sudo[74494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfzqjimqnebwlmzdfdnyqmoudebbyvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946794.1927586-434-186931929932768/AnsiballZ_stat.py'
Feb 24 15:26:34 compute-0 sudo[74494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:34 compute-0 python3.9[74497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:26:34 compute-0 sudo[74494]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:34 compute-0 sudo[74618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xasobbttkdtmhpgwjhokzddhcmjhcvic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946794.1927586-434-186931929932768/AnsiballZ_copy.py'
Feb 24 15:26:34 compute-0 sudo[74618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:35 compute-0 python3.9[74621]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946794.1927586-434-186931929932768/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:35 compute-0 sudo[74618]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:35 compute-0 sudo[74771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxxhsfjjmgmwzusskiszxodbbmkbkri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946795.3047047-449-56807788303051/AnsiballZ_file.py'
Feb 24 15:26:35 compute-0 sudo[74771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:35 compute-0 python3.9[74774]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:35 compute-0 sudo[74771]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:36 compute-0 sudo[74924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmkygfrnrohtbnddumqsjjfqiggubgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946795.8352108-457-14604683880968/AnsiballZ_command.py'
Feb 24 15:26:36 compute-0 sudo[74924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:36 compute-0 python3.9[74927]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:26:36 compute-0 sudo[74924]: pam_unix(sudo:session): session closed for user root
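
Before anything is applied, all five generated files are concatenated in load order (chains, flushes, rules, jump updates, jumps) and passed through nft -c -f -, which parses and checks the ruleset without committing it. As a task (note pipefail, so a failing cat cannot be masked by nft):

    - name: Validate the generated nftables files    # name hypothetical
      ansible.builtin.shell:
        cmd: >-
          set -o pipefail;
          cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft
          /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft
          /etc/nftables/edpm-jumps.nft | nft -c -f -
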
Feb 24 15:26:36 compute-0 sudo[75084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfkpdqftuquxjjouglehdvzaveiaicva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946796.428886-465-152027148295614/AnsiballZ_blockinfile.py'
Feb 24 15:26:36 compute-0 sudo[75084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:37 compute-0 python3.9[75087]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:37 compute-0 sudo[75084]: pam_unix(sudo:session): session closed for user root
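
The blockinfile task above is what makes the ruleset persistent: it appends include lines for the generated files to /etc/sysconfig/nftables.conf (which nftables.service loads at boot), again validated with nft -c before the file is written. Sketch with the logged block:

    - name: Persist the EDPM includes in nftables.conf    # name hypothetical
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"
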
Feb 24 15:26:37 compute-0 sudo[75238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhaswvurjcoclhraguyzcireueavykki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946797.2748325-474-268620939066876/AnsiballZ_file.py'
Feb 24 15:26:37 compute-0 sudo[75238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:37 compute-0 python3.9[75241]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:37 compute-0 sudo[75238]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:37 compute-0 sudo[75391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xitpudpswtfheylpbqmnmrzkzkfuiwec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946797.7692096-474-147031230835153/AnsiballZ_file.py'
Feb 24 15:26:37 compute-0 sudo[75391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:38 compute-0 python3.9[75394]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:38 compute-0 sudo[75391]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:38 compute-0 sudo[75544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozkispkpeqdyoykdakzjrppfmearvdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946798.3198774-489-30374809620578/AnsiballZ_mount.py'
Feb 24 15:26:38 compute-0 sudo[75544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:38 compute-0 python3.9[75547]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 24 15:26:38 compute-0 sudo[75544]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:38 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:26:39 compute-0 sudo[75699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqigvkueleowdfqxiyrebchwklhavrmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946799.0518005-489-7100252859858/AnsiballZ_mount.py'
Feb 24 15:26:39 compute-0 sudo[75699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:39 compute-0 python3.9[75702]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 24 15:26:39 compute-0 sudo[75699]: pam_unix(sudo:session): session closed for user root
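
The last tasks of this session prepare hugepage-backed memory for the compute workload: a mount point per page size (1G and 2M), owned by zuul:hugetlbfs, then a hugetlbfs mount with the matching pagesize option. state=mounted both mounts immediately and writes an fstab entry. For one page size:

    # Sketch: names hypothetical; parameters as logged
    - name: Create the 1G hugepage mount point
      ansible.builtin.file:
        path: /dev/hugepages1G
        state: directory
        owner: zuul
        group: hugetlbfs
        mode: "0775"

    - name: Mount hugetlbfs with 1G pages
      ansible.posix.mount:
        path: /dev/hugepages1G
        src: none
        fstype: hugetlbfs
        opts: pagesize=1G
        state: mounted    # mount now and persist via fstab
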
Feb 24 15:26:39 compute-0 sshd-session[66435]: Connection closed by 192.168.122.30 port 49244
Feb 24 15:26:39 compute-0 sshd-session[66432]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:26:39 compute-0 systemd-logind[813]: Session 14 logged out. Waiting for processes to exit.
Feb 24 15:26:39 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Feb 24 15:26:39 compute-0 systemd[1]: session-14.scope: Consumed 28.747s CPU time.
Feb 24 15:26:39 compute-0 systemd-logind[813]: Removed session 14.
Feb 24 15:26:45 compute-0 sshd-session[75728]: Accepted publickey for zuul from 192.168.122.30 port 36300 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:26:45 compute-0 systemd-logind[813]: New session 15 of user zuul.
Feb 24 15:26:45 compute-0 systemd[1]: Started Session 15 of User zuul.
Feb 24 15:26:45 compute-0 sshd-session[75728]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:26:46 compute-0 sudo[75881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmjvrosoagnnkklyeroiqsaodbnnkcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946805.5103707-16-244832485278497/AnsiballZ_tempfile.py'
Feb 24 15:26:46 compute-0 sudo[75881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:46 compute-0 python3.9[75884]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 24 15:26:46 compute-0 sudo[75881]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:46 compute-0 sudo[76034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojlwajydcycvwbdkehsdhdpsjdhuwno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946806.3556423-28-153798808699077/AnsiballZ_stat.py'
Feb 24 15:26:46 compute-0 sudo[76034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:46 compute-0 python3.9[76037]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:26:46 compute-0 sudo[76034]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:47 compute-0 sudo[76187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wicgzltgznnpknzhqapdtzezfzyvldvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946807.1508043-38-185915044997192/AnsiballZ_setup.py'
Feb 24 15:26:47 compute-0 sudo[76187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:48 compute-0 python3.9[76190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:26:48 compute-0 sudo[76187]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:48 compute-0 sudo[76340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxyefogyjmbupwqlsamyitayxekzhxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946808.2558436-47-56394087454582/AnsiballZ_blockinfile.py'
Feb 24 15:26:48 compute-0 sudo[76340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:48 compute-0 python3.9[76343]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC24s8MK6Op81KsYt3IywD1thxBe3IGeu976ff4rTl9sDo+aT2Hr3M+F0iSFQabGYiZv9IEocmq30Poj0jbQ14Uo/C9aKydD4XIYJyBNUcq7GvCRyUzmSzyEo7zUqQcXA9gwU3oXgOsdZeNJGPRNq+f8+gmyVZAOpjIiIP+HpyPVOzhu7uaH3nddMTWVzxiydcEJ9gOH5Hfe7+nQFCpJnb/DBQZmwZr3xjd5Sp0K/V4PTd+IhSnkQ3pYP+08olaAcLZ+7pRFp9nuNnQ/Jh8OpDaoyoZNFWhdup7VEewCM9vXQsCmyrCMsBS9MFlXuJGtbCWRI0jjvzBgFpF3Ni7VHsBdvHQfEyr2PIuyOKqRBbJ/TVWTiyqZ2RRZsrK0nhya6DLsdod3m5QctseN1ViwMWGZYXsESVi1jyy9Il06q/Jixu2xKrjBu+oQKWmEYf9ZNQsjANCMFmtt6bMhkfz7y9MEER9g3p6XF6m9jwfeGJHx1gUI5WR4HoDpg3SgsaeZTU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKkGUdlabcAj8AvnaYyphF2JnB0Odb2zb3YV/T+loS2W
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ0rfCZsSHIV1mefy+XIJ8NlgpCEMd7iagHb3nh6/0nclaQUtbgO8k9tKdN/BOw8oflX9T6DQtPNZ4n/7bRT6FE=
                                             create=True mode=0644 path=/tmp/ansible.ye37u7oz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:48 compute-0 sudo[76340]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:49 compute-0 sudo[76493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvjjcsymwoxyvbzvxjlxtnurhjmnzbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946809.1065922-55-128967534169923/AnsiballZ_command.py'
Feb 24 15:26:49 compute-0 sudo[76493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:49 compute-0 python3.9[76496]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ye37u7oz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:26:49 compute-0 sudo[76493]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:50 compute-0 sudo[76648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkvmzklspcqyqpbcvlmuqlktanqzzqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946809.8523803-63-52679503502703/AnsiballZ_file.py'
Feb 24 15:26:50 compute-0 sudo[76648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:50 compute-0 python3.9[76651]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ye37u7oz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:26:50 compute-0 sudo[76648]: pam_unix(sudo:session): session closed for user root
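
Session 15 assembles a system-wide /etc/ssh/ssh_known_hosts: a temporary staging file, a blockinfile carrying the host's three public keys (gathered through the ssh_host_key_* fact subsets at 15:26:48), a shell redirect into place, and removal of the staging file. A condensed sketch; the register variable name is an assumption, and the ssh-rsa and ecdsa lines are elided here since they appear in full above:

    - name: Create a staging file
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp    # variable name assumed

    - name: Stage the host keys
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: "0644"
        block: |
          compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKkGUdlabcAj8AvnaYyphF2JnB0Odb2zb3YV/T+loS2W
          # (ssh-rsa and ecdsa-sha2-nistp256 lines omitted; see the log above)

    - name: Install ssh_known_hosts
      ansible.builtin.shell:
        cmd: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the staging file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent
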
Feb 24 15:26:50 compute-0 sshd-session[75731]: Connection closed by 192.168.122.30 port 36300
Feb 24 15:26:50 compute-0 sshd-session[75728]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:26:50 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Feb 24 15:26:50 compute-0 systemd[1]: session-15.scope: Consumed 2.899s CPU time.
Feb 24 15:26:50 compute-0 systemd-logind[813]: Session 15 logged out. Waiting for processes to exit.
Feb 24 15:26:50 compute-0 systemd-logind[813]: Removed session 15.
Feb 24 15:26:53 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 24 15:26:56 compute-0 sshd-session[76678]: Accepted publickey for zuul from 192.168.122.30 port 56712 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:26:56 compute-0 systemd-logind[813]: New session 16 of user zuul.
Feb 24 15:26:56 compute-0 systemd[1]: Started Session 16 of User zuul.
Feb 24 15:26:56 compute-0 sshd-session[76678]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:26:57 compute-0 python3.9[76831]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:26:58 compute-0 sudo[76985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaacibovxroxnjelzrwlrpaahnmcxwea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946817.8684523-27-104224526323790/AnsiballZ_systemd.py'
Feb 24 15:26:58 compute-0 sudo[76985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:58 compute-0 python3.9[76988]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 24 15:26:58 compute-0 sudo[76985]: pam_unix(sudo:session): session closed for user root
Feb 24 15:26:59 compute-0 sudo[77140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzussthdgawjrqtthasexbaluqyvudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946818.948294-35-120535785841179/AnsiballZ_systemd.py'
Feb 24 15:26:59 compute-0 sudo[77140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:26:59 compute-0 python3.9[77143]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:26:59 compute-0 sudo[77140]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:00 compute-0 sudo[77294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blyfxfovcawqcnnjcsyxisuymlkkidey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946819.7264626-44-27338746617984/AnsiballZ_command.py'
Feb 24 15:27:00 compute-0 sudo[77294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:00 compute-0 python3.9[77297]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:27:00 compute-0 sudo[77294]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:00 compute-0 sudo[77448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krczxpixazjbkqtifpzsvrnnsutarwlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946820.5034637-52-18088294782010/AnsiballZ_stat.py'
Feb 24 15:27:00 compute-0 sudo[77448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:01 compute-0 python3.9[77451]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:27:01 compute-0 sudo[77448]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:01 compute-0 sudo[77603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffzqehtdzkpjysphxdqrhxmtsyuveso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946821.2091455-60-88402447179072/AnsiballZ_command.py'
Feb 24 15:27:01 compute-0 sudo[77603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:01 compute-0 python3.9[77606]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:27:01 compute-0 sudo[77603]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:02 compute-0 sudo[77759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vveknrijmfcvwdwikbpckymizicbdznh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946821.8343947-68-10962731071315/AnsiballZ_file.py'
Feb 24 15:27:02 compute-0 sudo[77759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:02 compute-0 python3.9[77762]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:02 compute-0 sudo[77759]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:02 compute-0 sshd-session[76681]: Connection closed by 192.168.122.30 port 56712
Feb 24 15:27:02 compute-0 sshd-session[76678]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:27:02 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Feb 24 15:27:02 compute-0 systemd[1]: session-16.scope: Consumed 3.852s CPU time.
Feb 24 15:27:02 compute-0 systemd-logind[813]: Session 16 logged out. Waiting for processes to exit.
Feb 24 15:27:02 compute-0 systemd-logind[813]: Removed session 16.
Feb 24 15:27:05 compute-0 sshd-session[77787]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 15:27:07 compute-0 sshd-session[77789]: Accepted publickey for zuul from 192.168.122.30 port 36684 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:27:07 compute-0 systemd-logind[813]: New session 17 of user zuul.
Feb 24 15:27:07 compute-0 systemd[1]: Started Session 17 of User zuul.
Feb 24 15:27:07 compute-0 sshd-session[77789]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:27:08 compute-0 python3.9[77942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:27:09 compute-0 sudo[78096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsvweisgmbpvonkyqexxzunoonsmdtqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946829.061843-29-47757992755210/AnsiballZ_setup.py'
Feb 24 15:27:09 compute-0 sudo[78096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:09 compute-0 python3.9[78099]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:27:09 compute-0 sudo[78096]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:10 compute-0 sudo[78181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntkgvossskmemudaxkykphsulztwdsdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946829.061843-29-47757992755210/AnsiballZ_dnf.py'
Feb 24 15:27:10 compute-0 sudo[78181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:10 compute-0 python3.9[78184]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 24 15:27:11 compute-0 sudo[78181]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:12 compute-0 python3.9[78335]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:27:13 compute-0 python3.9[78486]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:27:14 compute-0 python3.9[78636]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:27:14 compute-0 python3.9[78786]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:27:15 compute-0 sshd-session[77792]: Connection closed by 192.168.122.30 port 36684
Feb 24 15:27:15 compute-0 sshd-session[77789]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:27:15 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Feb 24 15:27:15 compute-0 systemd[1]: session-17.scope: Consumed 4.967s CPU time.
Feb 24 15:27:15 compute-0 systemd-logind[813]: Session 17 logged out. Waiting for processes to exit.
Feb 24 15:27:15 compute-0 systemd-logind[813]: Removed session 17.
Feb 24 15:27:20 compute-0 sshd-session[78811]: Accepted publickey for zuul from 192.168.122.30 port 55168 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:27:20 compute-0 systemd-logind[813]: New session 18 of user zuul.
Feb 24 15:27:20 compute-0 systemd[1]: Started Session 18 of User zuul.
Feb 24 15:27:20 compute-0 sshd-session[78811]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:27:21 compute-0 python3.9[78964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:27:23 compute-0 sudo[79118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlghbucuukmjkjbzbhjaiojodadzqljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946842.8658934-45-263490088879682/AnsiballZ_file.py'
Feb 24 15:27:23 compute-0 sudo[79118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:23 compute-0 python3.9[79121]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:23 compute-0 sudo[79118]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:23 compute-0 sudo[79271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunpsuozvryxiuxmqydcvopvlrnnztcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946843.673336-45-10681665069221/AnsiballZ_file.py'
Feb 24 15:27:23 compute-0 sudo[79271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:24 compute-0 python3.9[79274]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:24 compute-0 sudo[79271]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:24 compute-0 sudo[79424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dydulwgziqwewtupvdcmwzewwfmadtrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946844.3489735-60-180259830075364/AnsiballZ_stat.py'
Feb 24 15:27:24 compute-0 sudo[79424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:25 compute-0 python3.9[79427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:25 compute-0 sudo[79424]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:25 compute-0 sudo[79548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdwakrwurggbrcghhmrhjhckaruzvvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946844.3489735-60-180259830075364/AnsiballZ_copy.py'
Feb 24 15:27:25 compute-0 sudo[79548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:25 compute-0 python3.9[79551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946844.3489735-60-180259830075364/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=84ee23231f8ff54252d50220736ecae0aab882fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:25 compute-0 sudo[79548]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:26 compute-0 sudo[79701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpwqeojgrwlmzippsjroxacsalshqrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946845.8120413-60-70466133654356/AnsiballZ_stat.py'
Feb 24 15:27:26 compute-0 sudo[79701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:26 compute-0 python3.9[79704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:26 compute-0 sudo[79701]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:26 compute-0 sudo[79825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcbdbowkdsxjczsszjjahdnqpipzmvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946845.8120413-60-70466133654356/AnsiballZ_copy.py'
Feb 24 15:27:26 compute-0 sudo[79825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:26 compute-0 python3.9[79828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946845.8120413-60-70466133654356/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f34f6e662b8b528ea64a7465e7f5615e5ffb2816 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:26 compute-0 sudo[79825]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:27 compute-0 sudo[79978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtsntapjdwogsaccipwlhqojdxsaiakz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946846.871766-60-232790039348902/AnsiballZ_stat.py'
Feb 24 15:27:27 compute-0 sudo[79978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:27 compute-0 python3.9[79981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:27 compute-0 sudo[79978]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:27 compute-0 sudo[80102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvmrgpmufmnufqqlybmpsohcqrpymmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946846.871766-60-232790039348902/AnsiballZ_copy.py'
Feb 24 15:27:27 compute-0 sudo[80102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:27 compute-0 python3.9[80105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946846.871766-60-232790039348902/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=5d73f812dca0f6db2c277638d3aa63506f8d940e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:27 compute-0 sudo[80102]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:28 compute-0 sudo[80255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzziojwhrpyamwvenqkkbtpjmycihwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946848.0278955-104-17251943900304/AnsiballZ_file.py'
Feb 24 15:27:28 compute-0 sudo[80255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:28 compute-0 python3.9[80258]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:28 compute-0 sudo[80255]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:28 compute-0 sudo[80408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogdojeowfackvuzpdobikrrnrldtyrcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946848.73732-104-260510826154805/AnsiballZ_file.py'
Feb 24 15:27:28 compute-0 sudo[80408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:29 compute-0 python3.9[80411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:29 compute-0 sudo[80408]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:29 compute-0 sudo[80561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcwrjrkbgxsqjxzgfwiyfwngezcdpyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946849.3904314-119-112315029187227/AnsiballZ_stat.py'
Feb 24 15:27:29 compute-0 sudo[80561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:29 compute-0 python3.9[80564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:29 compute-0 sudo[80561]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:30 compute-0 sudo[80685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkxuvmnzagiiwdfpzwibkbwizcixlrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946849.3904314-119-112315029187227/AnsiballZ_copy.py'
Feb 24 15:27:30 compute-0 sudo[80685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:30 compute-0 python3.9[80688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946849.3904314-119-112315029187227/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3d72985b6d1ff138af7e8325479964ca80e5204f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:30 compute-0 sudo[80685]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:30 compute-0 sudo[80838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmryvsrtzwckqzmjnrximpkxjutlkpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946850.456662-119-175740491959805/AnsiballZ_stat.py'
Feb 24 15:27:30 compute-0 sudo[80838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:30 compute-0 python3.9[80841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:30 compute-0 sudo[80838]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:31 compute-0 sudo[80964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpitgylgygxahmzzrpefxaowfutmvwal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946850.456662-119-175740491959805/AnsiballZ_copy.py'
Feb 24 15:27:31 compute-0 sudo[80964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:31 compute-0 python3.9[80967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946850.456662-119-175740491959805/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f34f6e662b8b528ea64a7465e7f5615e5ffb2816 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:31 compute-0 sudo[80964]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:31 compute-0 sudo[81117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxicjugiplbhoedhsbiocjpvnytppmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946851.4640656-119-12681371305086/AnsiballZ_stat.py'
Feb 24 15:27:31 compute-0 sudo[81117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:31 compute-0 python3.9[81120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:31 compute-0 sudo[81117]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:32 compute-0 sudo[81241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcuilrlvgdjvgdqjbyewyvorxplbctnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946851.4640656-119-12681371305086/AnsiballZ_copy.py'
Feb 24 15:27:32 compute-0 sudo[81241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:32 compute-0 python3.9[81244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946851.4640656-119-12681371305086/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3ac1989ea15312bb6f5ccb8025d5ecb82c4260df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:32 compute-0 sudo[81241]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:32 compute-0 sudo[81394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqcekiugminqaujtdxlrkvjmdllszfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946852.6647968-163-89526235776200/AnsiballZ_file.py'
Feb 24 15:27:32 compute-0 sudo[81394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:33 compute-0 python3.9[81397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:33 compute-0 sudo[81394]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:33 compute-0 sudo[81547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koyxvddzaphmspvqxwllghysisrilipi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946853.1671653-163-29598684388833/AnsiballZ_file.py'
Feb 24 15:27:33 compute-0 sudo[81547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:33 compute-0 python3.9[81550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:33 compute-0 sudo[81547]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:33 compute-0 sshd-session[80866]: Received disconnect from 120.48.56.86 port 37264:11:  [preauth]
Feb 24 15:27:33 compute-0 sshd-session[80866]: Disconnected from authenticating user root 120.48.56.86 port 37264 [preauth]
Feb 24 15:27:33 compute-0 sudo[81700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzyhllhmxwkeyathyemfxujjgexaukx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946853.7761989-178-9973314337827/AnsiballZ_stat.py'
Feb 24 15:27:33 compute-0 sudo[81700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:34 compute-0 python3.9[81703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:34 compute-0 sudo[81700]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:34 compute-0 sudo[81824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddckijmjjznevgowwvjlpgvsaiilzsap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946853.7761989-178-9973314337827/AnsiballZ_copy.py'
Feb 24 15:27:34 compute-0 sudo[81824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:34 compute-0 python3.9[81827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946853.7761989-178-9973314337827/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=40e8f33cfce943b9d7552ffbbee523833b279e73 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:34 compute-0 sudo[81824]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:35 compute-0 sudo[81977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxjbiefcectxblpjhaswwyfgimeghhxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946854.7651763-178-25549830485073/AnsiballZ_stat.py'
Feb 24 15:27:35 compute-0 sudo[81977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:35 compute-0 python3.9[81980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:35 compute-0 sudo[81977]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:35 compute-0 sudo[82101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjfbkttgpbufcsukgsykngbdqfkacwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946854.7651763-178-25549830485073/AnsiballZ_copy.py'
Feb 24 15:27:35 compute-0 sudo[82101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:35 compute-0 python3.9[82104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946854.7651763-178-25549830485073/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa38b01bba72d49a23e241e5ea488b0dd7e2d088 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:35 compute-0 sudo[82101]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:36 compute-0 sudo[82254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiqauwmygllnblmifflwmrcwdtabygkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946855.8344595-178-216245237648438/AnsiballZ_stat.py'
Feb 24 15:27:36 compute-0 sudo[82254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:36 compute-0 python3.9[82257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:36 compute-0 sudo[82254]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:36 compute-0 sudo[82378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmmqpxwztgrwrglvtofptndvdbtknuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946855.8344595-178-216245237648438/AnsiballZ_copy.py'
Feb 24 15:27:36 compute-0 sudo[82378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:36 compute-0 python3.9[82381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946855.8344595-178-216245237648438/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=86abab5ed1b44ac5f7a0efba3fa57f3f256caa8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:36 compute-0 sudo[82378]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:37 compute-0 sudo[82531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhagdlhwjetfxfdctciprzqafewjdczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946857.0273166-222-129148608351216/AnsiballZ_file.py'
Feb 24 15:27:37 compute-0 sudo[82531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:37 compute-0 python3.9[82534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:37 compute-0 sudo[82531]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:37 compute-0 sudo[82684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrhclxtpgzoqojzmifhxuestxhthzye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946857.6237524-222-170136568483691/AnsiballZ_file.py'
Feb 24 15:27:37 compute-0 sudo[82684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:38 compute-0 python3.9[82687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:38 compute-0 sudo[82684]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:38 compute-0 sudo[82837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqpqdwyhwftciiqkbugaifovewuhnthu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946858.2527673-237-186235867307090/AnsiballZ_stat.py'
Feb 24 15:27:38 compute-0 sudo[82837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:38 compute-0 python3.9[82840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:38 compute-0 sudo[82837]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:38 compute-0 sudo[82961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxomcllprvskmnolphwmuyyhdbtssfvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946858.2527673-237-186235867307090/AnsiballZ_copy.py'
Feb 24 15:27:38 compute-0 sudo[82961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:39 compute-0 python3.9[82964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946858.2527673-237-186235867307090/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ac1b4cbd0b37c7ef1c2bc5f569d3065ebd7f1984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:39 compute-0 sudo[82961]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:39 compute-0 sudo[83114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iacoayycdilsfaqnsnxpolcvqbpwwhxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946859.2893927-237-169680664787053/AnsiballZ_stat.py'
Feb 24 15:27:39 compute-0 sudo[83114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:39 compute-0 python3.9[83117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:39 compute-0 sudo[83114]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:39 compute-0 sudo[83238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwianlqqoydzsofhxylpwxtaegkwccre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946859.2893927-237-169680664787053/AnsiballZ_copy.py'
Feb 24 15:27:39 compute-0 sudo[83238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:40 compute-0 python3.9[83241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946859.2893927-237-169680664787053/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a63ffff80b288be6cf0f7f832a64bd0def9513f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:40 compute-0 sudo[83238]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:40 compute-0 sudo[83391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeahahcemplztwykgsksdtqbwsgnjmvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946860.2719367-237-87355073622983/AnsiballZ_stat.py'
Feb 24 15:27:40 compute-0 sudo[83391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:40 compute-0 python3.9[83394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:40 compute-0 sudo[83391]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:41 compute-0 sudo[83515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pymvvbsniqdaxvsydasvxvcsslzyvusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946860.2719367-237-87355073622983/AnsiballZ_copy.py'
Feb 24 15:27:41 compute-0 sudo[83515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:41 compute-0 python3.9[83518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946860.2719367-237-87355073622983/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=5ef09fbbad91ce6c419d5a2fec14f8a0dc0146da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:41 compute-0 sudo[83515]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:41 compute-0 sudo[83668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vciuxyvdeeuhsjxqkcthghvmobqlkgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946861.601751-281-58738411367967/AnsiballZ_file.py'
Feb 24 15:27:41 compute-0 sudo[83668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:42 compute-0 python3.9[83671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:42 compute-0 sudo[83668]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:42 compute-0 sudo[83821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhayaorqpodegdpfzcdoceqezecubqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946862.1770635-281-152361942519465/AnsiballZ_file.py'
Feb 24 15:27:42 compute-0 sudo[83821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:42 compute-0 python3.9[83824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:42 compute-0 sudo[83821]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:43 compute-0 sudo[83974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fysxojqzaugpsecoqtocbdfqturuujph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946862.810615-296-111177452347968/AnsiballZ_stat.py'
Feb 24 15:27:43 compute-0 sudo[83974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:43 compute-0 python3.9[83977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:43 compute-0 sudo[83974]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:43 compute-0 sudo[84098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hirtabqlzbmkcnjldkthbosjykwpudrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946862.810615-296-111177452347968/AnsiballZ_copy.py'
Feb 24 15:27:43 compute-0 sudo[84098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:43 compute-0 python3.9[84101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946862.810615-296-111177452347968/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f7dd95b89349e32b1124b8efcd835861358ec0ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:43 compute-0 sudo[84098]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:44 compute-0 sudo[84251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swsxvoopncruxzxwtkkodirzitqfygil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946863.8931909-296-181568112871408/AnsiballZ_stat.py'
Feb 24 15:27:44 compute-0 sudo[84251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:44 compute-0 python3.9[84254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:44 compute-0 sudo[84251]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:44 compute-0 sudo[84375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rietzjqojnmqicewcpkittorohqhfybt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946863.8931909-296-181568112871408/AnsiballZ_copy.py'
Feb 24 15:27:44 compute-0 sudo[84375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:44 compute-0 python3.9[84378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946863.8931909-296-181568112871408/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa38b01bba72d49a23e241e5ea488b0dd7e2d088 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:44 compute-0 sudo[84375]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:45 compute-0 sudo[84528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsdgnvwfjtzjuqclqyifbgcihnocdcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946864.9892623-296-273203616436688/AnsiballZ_stat.py'
Feb 24 15:27:45 compute-0 sudo[84528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:45 compute-0 python3.9[84531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:45 compute-0 sudo[84528]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:45 compute-0 sudo[84652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exrewoecnajtmfokhytmomralhsamcuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946864.9892623-296-273203616436688/AnsiballZ_copy.py'
Feb 24 15:27:45 compute-0 sudo[84652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:46 compute-0 python3.9[84655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946864.9892623-296-273203616436688/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=54e2622e10057c63cefcbcf5349e1cba20fcb1f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:46 compute-0 sudo[84652]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:46 compute-0 sudo[84805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwvmghnfuwijqekvfqxctqunpmohqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946866.7482972-356-7981081382385/AnsiballZ_file.py'
Feb 24 15:27:46 compute-0 sudo[84805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:47 compute-0 python3.9[84808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:47 compute-0 sudo[84805]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:47 compute-0 sudo[84958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akedznvtfquumnpfqbspveoscuhnljrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946867.250911-364-182232879435371/AnsiballZ_stat.py'
Feb 24 15:27:47 compute-0 sudo[84958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:47 compute-0 python3.9[84961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:47 compute-0 sudo[84958]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:47 compute-0 sudo[85082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htagnffdkclqvwllabqgnrbyolfpjkvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946867.250911-364-182232879435371/AnsiballZ_copy.py'
Feb 24 15:27:47 compute-0 sudo[85082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:48 compute-0 python3.9[85085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946867.250911-364-182232879435371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:48 compute-0 sudo[85082]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:48 compute-0 sudo[85235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzoqwxdzdbhypmlhjslqztbswowqdrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946868.322106-380-199917334216403/AnsiballZ_file.py'
Feb 24 15:27:48 compute-0 sudo[85235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:48 compute-0 python3.9[85238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:48 compute-0 sudo[85235]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:49 compute-0 sudo[85388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvfeviexyfjjbgojfivymjsvulusbdxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946868.8682516-388-171638938331752/AnsiballZ_stat.py'
Feb 24 15:27:49 compute-0 sudo[85388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:49 compute-0 python3.9[85391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:49 compute-0 sudo[85388]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:49 compute-0 sudo[85512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxxwgiloovzqmdjcrctclbqdjzohtgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946868.8682516-388-171638938331752/AnsiballZ_copy.py'
Feb 24 15:27:49 compute-0 sudo[85512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:49 compute-0 python3.9[85515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946868.8682516-388-171638938331752/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:49 compute-0 sudo[85512]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:50 compute-0 sudo[85665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdehnyypiqbeuuieqabmptykemuiilzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946869.9157286-404-237594858216714/AnsiballZ_file.py'
Feb 24 15:27:50 compute-0 sudo[85665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:50 compute-0 python3.9[85668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:50 compute-0 sudo[85665]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:50 compute-0 sudo[85818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzmgpohtnzjkrpoaioraxlmvtuvyoza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946870.6046257-412-214560139306753/AnsiballZ_stat.py'
Feb 24 15:27:50 compute-0 sudo[85818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:51 compute-0 python3.9[85821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:51 compute-0 sudo[85818]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:51 compute-0 sudo[85942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttdwewhbyvyvigzuqackcilxiduydpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946870.6046257-412-214560139306753/AnsiballZ_copy.py'
Feb 24 15:27:51 compute-0 sudo[85942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:51 compute-0 python3.9[85945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946870.6046257-412-214560139306753/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:51 compute-0 sudo[85942]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:51 compute-0 sudo[86095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajlzkvqegzhobbvgwawpulfddnkbnked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946871.7328806-428-218302638237301/AnsiballZ_file.py'
Feb 24 15:27:51 compute-0 sudo[86095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:52 compute-0 python3.9[86098]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:52 compute-0 sudo[86095]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:52 compute-0 sudo[86248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npvaeqiwcasavporxueeyszoihfyiapp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946872.3269129-436-128691216690898/AnsiballZ_stat.py'
Feb 24 15:27:52 compute-0 sudo[86248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:52 compute-0 python3.9[86251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:52 compute-0 sudo[86248]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:53 compute-0 sudo[86372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxtkbdpdzufgsudqjbxqcumoogzhluv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946872.3269129-436-128691216690898/AnsiballZ_copy.py'
Feb 24 15:27:53 compute-0 sudo[86372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:53 compute-0 python3.9[86375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946872.3269129-436-128691216690898/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:53 compute-0 sudo[86372]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:53 compute-0 sudo[86525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaugyqttfcgywnacomrraxclcsuuovjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946873.477405-452-244817122519313/AnsiballZ_file.py'
Feb 24 15:27:53 compute-0 sudo[86525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:53 compute-0 python3.9[86528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:53 compute-0 sudo[86525]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:54 compute-0 sudo[86678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhaghnoqcwvwutlbujfnzhyecamaxxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946874.082173-460-57505972399950/AnsiballZ_stat.py'
Feb 24 15:27:54 compute-0 sudo[86678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:54 compute-0 python3.9[86681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:54 compute-0 sudo[86678]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:54 compute-0 sudo[86802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muafrzhxkbmegmyayzfndtcxxfayvzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946874.082173-460-57505972399950/AnsiballZ_copy.py'
Feb 24 15:27:54 compute-0 sudo[86802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:55 compute-0 python3.9[86805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946874.082173-460-57505972399950/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:55 compute-0 sudo[86802]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:55 compute-0 sudo[86955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodenwsgfvmzyykqjzrfkrcbymphzhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946875.4158313-476-149996629285379/AnsiballZ_file.py'
Feb 24 15:27:55 compute-0 sudo[86955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:56 compute-0 python3.9[86958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:56 compute-0 sudo[86955]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:56 compute-0 sudo[87108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimwmrpsihnarndtcxvjynyvufybszkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946876.3129928-484-135811681399963/AnsiballZ_stat.py'
Feb 24 15:27:56 compute-0 sudo[87108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:56 compute-0 python3.9[87111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:56 compute-0 sudo[87108]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:57 compute-0 sudo[87232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gltvjdmzkzhmigkjtnwjrnbayxkprpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946876.3129928-484-135811681399963/AnsiballZ_copy.py'
Feb 24 15:27:57 compute-0 sudo[87232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:57 compute-0 python3.9[87235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946876.3129928-484-135811681399963/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:57 compute-0 sudo[87232]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:57 compute-0 sudo[87385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpxtaajokmyiydwadnorhxtofrbnltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946877.4088805-500-94284513996019/AnsiballZ_file.py'
Feb 24 15:27:57 compute-0 sudo[87385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:57 compute-0 python3.9[87388]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:57 compute-0 sudo[87385]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:58 compute-0 sudo[87538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuiakiisoebnrihcgnarxbwdhjcvjsyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946877.9729562-508-115389794859625/AnsiballZ_stat.py'
Feb 24 15:27:58 compute-0 sudo[87538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:58 compute-0 python3.9[87541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:27:58 compute-0 sudo[87538]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:58 compute-0 sudo[87662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvpbntamxycwemerrxiogbawvcxmigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946877.9729562-508-115389794859625/AnsiballZ_copy.py'
Feb 24 15:27:58 compute-0 sudo[87662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:58 compute-0 python3.9[87665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946877.9729562-508-115389794859625/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:27:58 compute-0 sudo[87662]: pam_unix(sudo:session): session closed for user root
Feb 24 15:27:59 compute-0 sudo[87815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivshaqpvayvrsygyxvbpjwdnrgjpgui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946879.316691-524-144379241257312/AnsiballZ_file.py'
Feb 24 15:27:59 compute-0 sudo[87815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:27:59 compute-0 python3.9[87818]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:27:59 compute-0 sudo[87815]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:00 compute-0 sudo[87968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitencjuktshwidocbtauzjcwynglsub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946879.918178-532-166543764465273/AnsiballZ_stat.py'
Feb 24 15:28:00 compute-0 sudo[87968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:00 compute-0 python3.9[87971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:00 compute-0 sudo[87968]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:00 compute-0 sudo[88092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tacfuncsxyofnadvjzdfezxmavdansao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946879.918178-532-166543764465273/AnsiballZ_copy.py'
Feb 24 15:28:00 compute-0 sudo[88092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:00 compute-0 chronyd[66406]: Selected source 23.133.168.246 (pool.ntp.org)
Feb 24 15:28:00 compute-0 python3.9[88095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946879.918178-532-166543764465273/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=390fc204118afdd868ab9921a76d5494ac65a62e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:00 compute-0 sudo[88092]: pam_unix(sudo:session): session closed for user root
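[annotation] The six directory/stat/copy triplets above (libvirt, ovn, telemetry, neutron-metadata, bootstrap, telemetry-power-monitoring) all install the same tls-ca-bundle.pem (checksum 390fc204118afdd868ab9921a76d5494ac65a62e) under per-service paths in /var/lib/openstack/cacerts. A minimal shell sketch that reproduces the same end state — the loop itself is an assumption; the playbook's actual task structure is not visible in this log:

    for svc in libvirt ovn telemetry neutron-metadata bootstrap telemetry-power-monitoring; do
        # 0755 root:root directory carrying the container_file_t SELinux type, as logged
        install -d -m 0755 -o root -g root "/var/lib/openstack/cacerts/${svc}"
        chcon -t container_file_t "/var/lib/openstack/cacerts/${svc}"
        # 0644 root:root copy of the CA bundle
        install -m 0644 -o root -g root tls-ca-bundle.pem "/var/lib/openstack/cacerts/${svc}/tls-ca-bundle.pem"
    done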
Feb 24 15:28:01 compute-0 sshd-session[78814]: Connection closed by 192.168.122.30 port 55168
Feb 24 15:28:01 compute-0 sshd-session[78811]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:28:01 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Feb 24 15:28:01 compute-0 systemd-logind[813]: Session 18 logged out. Waiting for processes to exit.
Feb 24 15:28:01 compute-0 systemd[1]: session-18.scope: Consumed 29.566s CPU time.
Feb 24 15:28:01 compute-0 systemd-logind[813]: Removed session 18.
Feb 24 15:28:06 compute-0 sshd-session[88120]: Accepted publickey for zuul from 192.168.122.30 port 32804 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:28:06 compute-0 systemd-logind[813]: New session 19 of user zuul.
Feb 24 15:28:06 compute-0 systemd[1]: Started Session 19 of User zuul.
Feb 24 15:28:06 compute-0 sshd-session[88120]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:28:07 compute-0 python3.9[88273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:28:08 compute-0 sudo[88427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommtohhvqxmqxjofnqpbguyciyoeibsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946887.81361-29-259809267273221/AnsiballZ_file.py'
Feb 24 15:28:08 compute-0 sudo[88427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:08 compute-0 python3.9[88430]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:08 compute-0 sudo[88427]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:08 compute-0 sudo[88580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpnvhnrnackvzrxexscfosxcuwrrrjoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946888.5579119-29-44048438333267/AnsiballZ_file.py'
Feb 24 15:28:08 compute-0 sudo[88580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:09 compute-0 python3.9[88583]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:09 compute-0 sudo[88580]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:09 compute-0 python3.9[88733]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:28:10 compute-0 sudo[88883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csgutpjlnpqhbtpargzgvdvfnymhzpzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946889.948418-52-70887878642037/AnsiballZ_seboolean.py'
Feb 24 15:28:10 compute-0 sudo[88883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:10 compute-0 python3.9[88886]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 24 15:28:11 compute-0 sudo[88883]: pam_unix(sudo:session): session closed for user root
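[annotation] The seboolean task above persistently enables virt_sandbox_use_netlink. A hand-run equivalent plus a check, using standard SELinux tooling (not taken from this log):

    setsebool -P virt_sandbox_use_netlink on   # -P = persistent, matching persistent=True
    getsebool virt_sandbox_use_netlink         # expect: virt_sandbox_use_netlink --> on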
Feb 24 15:28:12 compute-0 sudo[89040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eidonxrenbfgarttvnhlszkadwfcfkli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946891.7801173-62-228600228070976/AnsiballZ_setup.py'
Feb 24 15:28:12 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 24 15:28:12 compute-0 sudo[89040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:12 compute-0 python3.9[89043]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:28:12 compute-0 sudo[89040]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:12 compute-0 sudo[89125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwjndbggxkkgusuoqxjffpfscpstklbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946891.7801173-62-228600228070976/AnsiballZ_dnf.py'
Feb 24 15:28:12 compute-0 sudo[89125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:13 compute-0 python3.9[89128]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:28:14 compute-0 sudo[89125]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:15 compute-0 sudo[89279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnvnyapixsohwgwezfaqgrxpllndhyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946894.9726832-74-202173168526060/AnsiballZ_systemd.py'
Feb 24 15:28:15 compute-0 sudo[89279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:15 compute-0 python3.9[89282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:28:15 compute-0 sudo[89279]: pam_unix(sudo:session): session closed for user root
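[annotation] The dnf and systemd tasks above install the openvswitch package and enable/start its unit. A by-hand equivalent, inferred from the logged module arguments (state=present; enabled=True, state=started):

    dnf install -y openvswitch
    systemctl enable --now openvswitch.service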
Feb 24 15:28:16 compute-0 sudo[89435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpyzzgfkgdfmfdxuanbpqblcllvclja ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771946896.035378-82-226618339624477/AnsiballZ_edpm_nftables_snippet.py'
Feb 24 15:28:16 compute-0 sudo[89435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:16 compute-0 python3[89438]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 24 15:28:16 compute-0 sudo[89435]: pam_unix(sudo:session): session closed for user root
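[annotation] The snippet written to /var/lib/edpm-config/firewall/ovn.yaml opens UDP 4789 (VXLAN) and UDP 6081 (Geneve) and marks Geneve traffic NOTRACK in the raw table. Roughly equivalent nft commands — the "inet filter" table and "EDPM_INPUT" chain names are assumptions for illustration only; the real names come from the edpm templates rendered later in this log:

    nft add rule inet filter EDPM_INPUT udp dport 4789 accept comment '"118 neutron vxlan networks"'
    nft add rule inet filter EDPM_INPUT udp dport 6081 ct state untracked accept comment '"119 neutron geneve networks"'
    nft add rule inet raw OUTPUT udp dport 6081 notrack comment '"120 neutron geneve networks no conntrack"'
    nft add rule inet raw PREROUTING udp dport 6081 notrack comment '"121 neutron geneve networks no conntrack"'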
Feb 24 15:28:17 compute-0 sudo[89588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhluilcrbpmpbxuakjmvlvmuuevddycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946896.8504238-91-199556388240518/AnsiballZ_file.py'
Feb 24 15:28:17 compute-0 sudo[89588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:17 compute-0 python3.9[89591]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:17 compute-0 sudo[89588]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:17 compute-0 sudo[89741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gneuuepwjejpkhnzqkhoayjbhuyfjfcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946897.4801013-99-152569942043090/AnsiballZ_stat.py'
Feb 24 15:28:17 compute-0 sudo[89741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:18 compute-0 python3.9[89744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:18 compute-0 sudo[89741]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:18 compute-0 sudo[89820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgzjpuxwckjkainybldqikofuomzfxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946897.4801013-99-152569942043090/AnsiballZ_file.py'
Feb 24 15:28:18 compute-0 sudo[89820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:18 compute-0 python3.9[89823]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:18 compute-0 sudo[89820]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:18 compute-0 sudo[89973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqzjmqmahqtqiomjgulowskagrbmrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946898.6593893-111-204292649980059/AnsiballZ_stat.py'
Feb 24 15:28:18 compute-0 sudo[89973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:19 compute-0 python3.9[89976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:19 compute-0 sudo[89973]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:19 compute-0 sudo[90052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgwepbzugtctbvrfubbqaxarhzzzyglp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946898.6593893-111-204292649980059/AnsiballZ_file.py'
Feb 24 15:28:19 compute-0 sudo[90052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:19 compute-0 python3.9[90055]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x4fvewde recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:19 compute-0 sudo[90052]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:19 compute-0 sudo[90205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadfcdquwvcktjsflyzpmbbzlgnyvtuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946899.6633608-123-80570204662768/AnsiballZ_stat.py'
Feb 24 15:28:19 compute-0 sudo[90205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:20 compute-0 python3.9[90208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:20 compute-0 sudo[90205]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:20 compute-0 sudo[90284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypzzeyorabqfawbscjveahskpccwwiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946899.6633608-123-80570204662768/AnsiballZ_file.py'
Feb 24 15:28:20 compute-0 sudo[90284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:20 compute-0 python3.9[90287]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:20 compute-0 sudo[90284]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:21 compute-0 sudo[90437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhufdvgmmclbnaicssrfbnffirqyliti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946900.768143-136-140728480282082/AnsiballZ_command.py'
Feb 24 15:28:21 compute-0 sudo[90437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:21 compute-0 python3.9[90440]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:21 compute-0 sudo[90437]: pam_unix(sudo:session): session closed for user root
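[annotation] nft -j list ruleset, run above, dumps the current ruleset as JSON for the edpm modules to consume. The same dump can be inspected by hand, e.g. to count rules per chain (jq availability is an assumption):

    nft -j list ruleset | jq '.nftables[] | select(.rule) | .rule.chain' | sort | uniq -c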
Feb 24 15:28:21 compute-0 sudo[90591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnkugcphujktwvmocswenjqpuesnomrd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771946901.503573-144-224501523746062/AnsiballZ_edpm_nftables_from_files.py'
Feb 24 15:28:21 compute-0 sudo[90591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:22 compute-0 python3[90594]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 24 15:28:22 compute-0 sudo[90591]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:22 compute-0 sudo[90744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soscemahhycmgqokopllwungzbwkwtdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946902.4052842-152-244056817951803/AnsiballZ_stat.py'
Feb 24 15:28:22 compute-0 sudo[90744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:22 compute-0 python3.9[90747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:22 compute-0 sudo[90744]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:23 compute-0 sudo[90870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmujwxjotkanzzwhqyjgfrvuoiudwhsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946902.4052842-152-244056817951803/AnsiballZ_copy.py'
Feb 24 15:28:23 compute-0 sudo[90870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:23 compute-0 python3.9[90873]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946902.4052842-152-244056817951803/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:23 compute-0 sudo[90870]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:23 compute-0 sudo[91023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeqklnvenexenaokopxagdyjxngxhwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946903.6302104-167-103268927414525/AnsiballZ_stat.py'
Feb 24 15:28:23 compute-0 sudo[91023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:24 compute-0 python3.9[91026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:24 compute-0 sudo[91023]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:24 compute-0 sudo[91149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvsfppjsbicassgmtsjywklwxqgcnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946903.6302104-167-103268927414525/AnsiballZ_copy.py'
Feb 24 15:28:24 compute-0 sudo[91149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:24 compute-0 python3.9[91152]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946903.6302104-167-103268927414525/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:24 compute-0 sudo[91149]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:24 compute-0 sudo[91302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfuxslkurkmnokoqykatnuvebpisezf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946904.7284153-182-137708982964191/AnsiballZ_stat.py'
Feb 24 15:28:24 compute-0 sudo[91302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:25 compute-0 python3.9[91305]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:25 compute-0 sudo[91302]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:25 compute-0 sudo[91428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudzaybpzymnoliesvsuvunagxvtvmsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946904.7284153-182-137708982964191/AnsiballZ_copy.py'
Feb 24 15:28:25 compute-0 sudo[91428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:25 compute-0 python3.9[91431]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946904.7284153-182-137708982964191/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:25 compute-0 sudo[91428]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:26 compute-0 sudo[91581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyhsfngpcafbqdrihybcjqwmrrzeuang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946905.8164046-197-9602255331857/AnsiballZ_stat.py'
Feb 24 15:28:26 compute-0 sudo[91581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:26 compute-0 python3.9[91584]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:26 compute-0 sudo[91581]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:26 compute-0 sudo[91707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njrlcqzxvqosurghftscczfqugmuxrpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946905.8164046-197-9602255331857/AnsiballZ_copy.py'
Feb 24 15:28:26 compute-0 sudo[91707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:26 compute-0 python3.9[91710]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946905.8164046-197-9602255331857/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:26 compute-0 sudo[91707]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:27 compute-0 sudo[91860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsoavozzmpkfqetdxhopfhwhwrngwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946906.963827-212-130182002093225/AnsiballZ_stat.py'
Feb 24 15:28:27 compute-0 sudo[91860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:27 compute-0 python3.9[91863]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:27 compute-0 sudo[91860]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:27 compute-0 sudo[91986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxedkbldyeevvfaufozidzjceykceixt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946906.963827-212-130182002093225/AnsiballZ_copy.py'
Feb 24 15:28:27 compute-0 sudo[91986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:27 compute-0 python3.9[91989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771946906.963827-212-130182002093225/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:27 compute-0 sudo[91986]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:28 compute-0 sudo[92139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxonqxultmdsaolfwpggxojuxxjukgnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946908.1421602-227-253617703705473/AnsiballZ_file.py'
Feb 24 15:28:28 compute-0 sudo[92139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:28 compute-0 python3.9[92142]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:28 compute-0 sudo[92139]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:28 compute-0 sudo[92292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhaivkspktoimkhlfphaqqmviykortf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946908.7334342-235-141141753159728/AnsiballZ_command.py'
Feb 24 15:28:29 compute-0 sudo[92292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:29 compute-0 python3.9[92295]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:29 compute-0 sudo[92292]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:29 compute-0 sudo[92448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqjmxwspfzrtxwyrbjeutzwemhdupphu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946909.4022722-243-145122574494011/AnsiballZ_blockinfile.py'
Feb 24 15:28:29 compute-0 sudo[92448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:29 compute-0 python3.9[92451]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:29 compute-0 sudo[92448]: pam_unix(sudo:session): session closed for user root
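[annotation] Given the blockinfile arguments logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with marker_begin=BEGIN and marker_end=END), the resulting stanza in /etc/sysconfig/nftables.conf should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

validate=nft -c -f %s means the whole file was syntax-checked before being moved into place.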
Feb 24 15:28:30 compute-0 sudo[92601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvjcdwvxjpnxwjjpqheaqbmhsypolmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946910.1981251-252-247388952179515/AnsiballZ_command.py'
Feb 24 15:28:30 compute-0 sudo[92601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:30 compute-0 python3.9[92604]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:30 compute-0 sudo[92601]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:31 compute-0 sudo[92755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwgluabnzofitqnwgvcqtdjbqsyehkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946910.8691204-260-54983446034887/AnsiballZ_stat.py'
Feb 24 15:28:31 compute-0 sudo[92755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:31 compute-0 python3.9[92758]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:28:31 compute-0 sudo[92755]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:31 compute-0 sudo[92910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvqkapohurrbchpmyurxeagodhsqidz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946911.4005575-268-150404630662354/AnsiballZ_command.py'
Feb 24 15:28:31 compute-0 sudo[92910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:31 compute-0 python3.9[92913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:31 compute-0 sudo[92910]: pam_unix(sudo:session): session closed for user root
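[annotation] Note the two-step apply: the chains file is loaded on its own first, then the flush, rules, and update-jumps files are concatenated into a single nft -f - run. nft applies each -f input atomically, so the ruleset is never observed half-flushed. The logged sequence, reproduced by hand:

    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -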
Feb 24 15:28:32 compute-0 sudo[93066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxwupbspnxkeyrhsdwlxvtvzznfjgtto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946911.9880462-276-18663589700179/AnsiballZ_file.py'
Feb 24 15:28:32 compute-0 sudo[93066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:32 compute-0 python3.9[93069]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:32 compute-0 sudo[93066]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:33 compute-0 python3.9[93219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:28:34 compute-0 sudo[93370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzdkyglwyzzdxsbtiltrrervuokjucc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946914.12594-317-266591463275233/AnsiballZ_command.py'
Feb 24 15:28:34 compute-0 sudo[93370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:34 compute-0 python3.9[93373]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:2f:db:26:37" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:34 compute-0 ovs-vsctl[93374]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:2f:db:26:37 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 24 15:28:34 compute-0 sudo[93370]: pam_unix(sudo:session): session closed for user root
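[annotation] The external_ids written above wire this chassis into OVN: southbound DB at ssl:ovsdbserver-sb.openstack.svc:6642, Geneve encapsulation sourced from 172.19.0.100, bridge mapping datacentre:br-ex. They can be read back with the standard ovs-vsctl getters ("open" is the same Open_vSwitch shorthand used in the log):

    ovs-vsctl get open . external_ids:ovn-remote
    ovs-vsctl get open . external_ids:ovn-encap-ip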
Feb 24 15:28:35 compute-0 sudo[93524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkgdvnqrlgxolrtwwesdvufulnvcovwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946914.887305-326-100550221650103/AnsiballZ_command.py'
Feb 24 15:28:35 compute-0 sudo[93524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:35 compute-0 python3.9[93527]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:35 compute-0 sudo[93524]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:35 compute-0 sudo[93680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnohaqojrkiydnyvwadgowdspzygmszl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946915.541792-334-80752638829517/AnsiballZ_command.py'
Feb 24 15:28:35 compute-0 sudo[93680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:35 compute-0 python3.9[93683]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:28:35 compute-0 ovs-vsctl[93684]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 24 15:28:36 compute-0 sudo[93680]: pam_unix(sudo:session): session closed for user root
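[annotation] The Manager record created above exposes the local OVSDB on ptcp:6640:127.0.0.1; it is what the preceding 'ovs-vsctl show | grep -q "Manager"' guard checks for on subsequent runs. Quick verification:

    ovs-vsctl get-manager    # expect: ptcp:6640:127.0.0.1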
Feb 24 15:28:36 compute-0 python3.9[93834]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:28:37 compute-0 sudo[93986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-extnteysqrxllxuninxcuiwwwcipyyaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946916.8153224-351-46483691537781/AnsiballZ_file.py'
Feb 24 15:28:37 compute-0 sudo[93986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:37 compute-0 python3.9[93989]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:37 compute-0 sudo[93986]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:37 compute-0 sudo[94139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvpkofblodghhjlmbdyadjazdcjhytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946917.3842475-359-189811262553669/AnsiballZ_stat.py'
Feb 24 15:28:37 compute-0 sudo[94139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:37 compute-0 python3.9[94142]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:37 compute-0 sudo[94139]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:38 compute-0 sudo[94218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvnejulrvaeudiosbjyaarmjvhsjrll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946917.3842475-359-189811262553669/AnsiballZ_file.py'
Feb 24 15:28:38 compute-0 sudo[94218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:38 compute-0 python3.9[94221]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:38 compute-0 sudo[94218]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:38 compute-0 sudo[94371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diujfukgqkfevezbvlekqbenozuqcops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946918.4000647-359-211936993752743/AnsiballZ_stat.py'
Feb 24 15:28:38 compute-0 sudo[94371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:38 compute-0 python3.9[94374]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:38 compute-0 sudo[94371]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:39 compute-0 sudo[94450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvumujeldbalhxxcqdfkzffdhpqoroi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946918.4000647-359-211936993752743/AnsiballZ_file.py'
Feb 24 15:28:39 compute-0 sudo[94450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:39 compute-0 python3.9[94453]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:39 compute-0 sudo[94450]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:39 compute-0 sudo[94603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdttifistdijdbriqycftmeoauuzphon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946919.391651-382-121170143843427/AnsiballZ_file.py'
Feb 24 15:28:39 compute-0 sudo[94603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:39 compute-0 python3.9[94606]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:39 compute-0 sudo[94603]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:40 compute-0 sudo[94756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnwwqcohqysijjelsgkkwsnbnziqpyyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946919.9780245-390-140708335306660/AnsiballZ_stat.py'
Feb 24 15:28:40 compute-0 sudo[94756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:40 compute-0 python3.9[94759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:40 compute-0 sudo[94756]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:40 compute-0 sudo[94835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtrqqonihgutojecpgxcgzxtxoujylxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946919.9780245-390-140708335306660/AnsiballZ_file.py'
Feb 24 15:28:40 compute-0 sudo[94835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:40 compute-0 python3.9[94838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:40 compute-0 sudo[94835]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:41 compute-0 sudo[94988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajckwwrenwetyhayzkszeesaocctydsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946920.9879713-402-107331480911585/AnsiballZ_stat.py'
Feb 24 15:28:41 compute-0 sudo[94988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:41 compute-0 python3.9[94991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:41 compute-0 sudo[94988]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:41 compute-0 sudo[95067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nepsihwbxvvwamdsotvhgeeadobayrij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946920.9879713-402-107331480911585/AnsiballZ_file.py'
Feb 24 15:28:41 compute-0 sudo[95067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:41 compute-0 python3.9[95070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:41 compute-0 sudo[95067]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:42 compute-0 sudo[95220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdsagbsupdpbhljzdeocoofslcxgwdzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946921.9810107-414-221105305596424/AnsiballZ_systemd.py'
Feb 24 15:28:42 compute-0 sudo[95220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:42 compute-0 python3.9[95223]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:28:42 compute-0 systemd[1]: Reloading.
Feb 24 15:28:42 compute-0 systemd-sysv-generator[95249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:28:42 compute-0 systemd-rc-local-generator[95245]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:28:42 compute-0 sudo[95220]: pam_unix(sudo:session): session closed for user root
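NOTE: the ansible-systemd task above (daemon_reload=True, enabled=True, state=started) maps onto two plain systemctl invocations. A minimal sketch of the same sequence, assuming the unit file is the /etc/systemd/system/edpm-container-shutdown.service written by the preceding copy/file tasks:

    # pick up the newly written unit file
    systemctl daemon-reload
    # enable at boot and start immediately (enabled=True + state=started)
    systemctl enable --now edpm-container-shutdown.service
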
Feb 24 15:28:43 compute-0 sudo[95417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbajtdgdipwsomgiylqgfueymripkwft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946922.9274902-422-250827402289473/AnsiballZ_stat.py'
Feb 24 15:28:43 compute-0 sudo[95417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:43 compute-0 python3.9[95420]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:43 compute-0 sudo[95417]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:43 compute-0 sudo[95496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrghtepmhrihzgrktrglwzfdvqcqyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946922.9274902-422-250827402289473/AnsiballZ_file.py'
Feb 24 15:28:43 compute-0 sudo[95496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:43 compute-0 python3.9[95499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:43 compute-0 sudo[95496]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:44 compute-0 sudo[95649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trvxfzpvqfwpyeqomvlgzqlgjcvcyymg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946924.0326045-434-762563150078/AnsiballZ_stat.py'
Feb 24 15:28:44 compute-0 sudo[95649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:44 compute-0 python3.9[95652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:44 compute-0 sudo[95649]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:44 compute-0 sudo[95728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dddtlwvzlpkfeqzvnvsyubzpmckjbvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946924.0326045-434-762563150078/AnsiballZ_file.py'
Feb 24 15:28:44 compute-0 sudo[95728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:44 compute-0 python3.9[95731]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:44 compute-0 sudo[95728]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:45 compute-0 sudo[95881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnzavcqrriqdidbsxiybpghrgktmzclu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946925.0081658-446-279068333440590/AnsiballZ_systemd.py'
Feb 24 15:28:45 compute-0 sudo[95881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:45 compute-0 python3.9[95884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:28:45 compute-0 systemd[1]: Reloading.
Feb 24 15:28:45 compute-0 systemd-rc-local-generator[95914]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:28:45 compute-0 systemd-sysv-generator[95918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:28:45 compute-0 systemd[1]: Starting Create netns directory...
Feb 24 15:28:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 24 15:28:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 24 15:28:45 compute-0 systemd[1]: Finished Create netns directory.
Feb 24 15:28:45 compute-0 sudo[95881]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:46 compute-0 sudo[96083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twhkdfmdipptwxnqxcfbniiblndfkcwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946926.0868142-456-238777536878389/AnsiballZ_file.py'
Feb 24 15:28:46 compute-0 sudo[96083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:46 compute-0 python3.9[96086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:46 compute-0 sudo[96083]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:46 compute-0 sudo[96236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzcbdwusykeqdpkzjemiubkmojhdmkca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946926.6662948-464-252828116401144/AnsiballZ_stat.py'
Feb 24 15:28:46 compute-0 sudo[96236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:47 compute-0 python3.9[96239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:47 compute-0 sudo[96236]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:47 compute-0 sudo[96360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plddbqiwstjasewmqrkwiovewjitcsvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946926.6662948-464-252828116401144/AnsiballZ_copy.py'
Feb 24 15:28:47 compute-0 sudo[96360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:47 compute-0 python3.9[96363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946926.6662948-464-252828116401144/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:47 compute-0 sudo[96360]: pam_unix(sudo:session): session closed for user root
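NOTE: the copy task above requested mode 0700, owner/group zuul, and SELinux type container_file_t for the healthcheck script. All three can be verified in one command, since ls -Z prints the SELinux context alongside the usual permissions:

    ls -lZ /var/lib/openstack/healthchecks/ovn_controller/healthcheck
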
Feb 24 15:28:48 compute-0 sudo[96513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmnwdculfdzqyzfowlhbupdignsojzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946927.9035542-481-270782845287061/AnsiballZ_file.py'
Feb 24 15:28:48 compute-0 sudo[96513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:48 compute-0 python3.9[96516]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:48 compute-0 sudo[96513]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:48 compute-0 sudo[96666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfnwbtnyzniiajpswgskoatsqviyzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946928.5675323-489-236422265248058/AnsiballZ_file.py'
Feb 24 15:28:48 compute-0 sudo[96666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:48 compute-0 python3.9[96669]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:28:49 compute-0 sudo[96666]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:49 compute-0 sudo[96819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgczkykftvzdfmipbbhyqnicdpbcihf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946929.21054-497-170551351500347/AnsiballZ_stat.py'
Feb 24 15:28:49 compute-0 sudo[96819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:49 compute-0 python3.9[96822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:28:49 compute-0 sudo[96819]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:50 compute-0 sudo[96943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrphkwrksprvehoncqybhxeijjfzlehg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946929.21054-497-170551351500347/AnsiballZ_copy.py'
Feb 24 15:28:50 compute-0 sudo[96943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:50 compute-0 python3.9[96946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946929.21054-497-170551351500347/.source.json _original_basename=.3blnrwts follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:50 compute-0 sudo[96943]: pam_unix(sudo:session): session closed for user root
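NOTE: the file written to /var/lib/kolla/config_files/ovn_controller.json is later bind-mounted into the container as /var/lib/kolla/config_files/config.json (see the --volume list in the podman create below) and consumed by kolla_set_configs at container start. Its contents are not logged here; the JSON in the comments is only an illustrative sketch of the usual kolla layout, with placeholder values that are not taken from this log:

    cat /var/lib/kolla/config_files/ovn_controller.json
    # illustrative shape only; the real command string is rendered by the deployment:
    # {
    #   "command": "/usr/bin/ovn-controller ...",
    #   "config_files": [
    #     {"source": "...", "dest": "...", "owner": "root", "perm": "0600"}
    #   ]
    # }
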
Feb 24 15:28:50 compute-0 python3.9[97096]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:52 compute-0 sudo[97517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusnpdoyxoojgeqeyeldypllejbsxdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946932.502009-537-40341491957912/AnsiballZ_container_config_data.py'
Feb 24 15:28:52 compute-0 sudo[97517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:53 compute-0 python3.9[97520]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 24 15:28:53 compute-0 sudo[97517]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:53 compute-0 sudo[97670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjobdecskrjhmjytanjbssabqwakouhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946933.407173-548-232235207369222/AnsiballZ_container_config_hash.py'
Feb 24 15:28:53 compute-0 sudo[97670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:54 compute-0 python3.9[97673]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:28:54 compute-0 sudo[97670]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:54 compute-0 sudo[97823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqglztoxgwblyxhssxeywyxolesgxes ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771946934.387053-558-84348314477716/AnsiballZ_edpm_container_manage.py'
Feb 24 15:28:54 compute-0 sudo[97823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:55 compute-0 python3[97826]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:28:55 compute-0 podman[97863]: 2026-02-24 15:28:55.276460659 +0000 UTC m=+0.049017349 container create 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 24 15:28:55 compute-0 podman[97863]: 2026-02-24 15:28:55.247875125 +0000 UTC m=+0.020431825 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 24 15:28:55 compute-0 python3[97826]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 24 15:28:55 compute-0 sudo[97823]: pam_unix(sudo:session): session closed for user root
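NOTE: the PODMAN-CONTAINER-DEBUG line above is the literal podman create invocation that the edpm_container_manage module assembled from config_data. Once the container exists, the labels and bind mounts it was created with can be read back, for example:

    # read one of the labels set with --label (config_id=ovn_controller)
    podman inspect ovn_controller --format '{{ index .Config.Labels "config_id" }}'
    # list the bind mounts requested with --volume
    podman inspect ovn_controller --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}'
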
Feb 24 15:28:55 compute-0 sudo[98051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjmabfuptuswjgmnbynksunmogskncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946935.5435138-566-150780706944475/AnsiballZ_stat.py'
Feb 24 15:28:55 compute-0 sudo[98051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:55 compute-0 python3.9[98054]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:28:56 compute-0 sudo[98051]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 24 15:28:56 compute-0 sudo[98206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxvnggogseomyxqccoxbuaiwdpkuogr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946936.3016448-575-23824272519986/AnsiballZ_file.py'
Feb 24 15:28:56 compute-0 sudo[98206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:56 compute-0 python3.9[98209]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:56 compute-0 sudo[98206]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:56 compute-0 sudo[98283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejaowxnzmyggolrjqbiljrpkfljqqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946936.3016448-575-23824272519986/AnsiballZ_stat.py'
Feb 24 15:28:56 compute-0 sudo[98283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:57 compute-0 python3.9[98286]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:28:57 compute-0 sudo[98283]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:58 compute-0 sudo[98435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjyoaxmgtmwljoxktbztcqakvjhltgjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946937.2478147-575-166352976918991/AnsiballZ_copy.py'
Feb 24 15:28:58 compute-0 sudo[98435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:58 compute-0 python3.9[98438]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771946937.2478147-575-166352976918991/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:28:58 compute-0 sudo[98435]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:59 compute-0 sudo[98514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvmgpsjanhqhtohrsugxgqrknsrscmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946937.2478147-575-166352976918991/AnsiballZ_systemd.py'
Feb 24 15:28:59 compute-0 sudo[98514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:28:59 compute-0 python3.9[98517]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:28:59 compute-0 systemd[1]: Reloading.
Feb 24 15:28:59 compute-0 systemd-rc-local-generator[98542]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:28:59 compute-0 systemd-sysv-generator[98545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:28:59 compute-0 sudo[98514]: pam_unix(sudo:session): session closed for user root
Feb 24 15:28:59 compute-0 sudo[98633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgkeuzxqlcgwrdrbgklexlmhumswcdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946937.2478147-575-166352976918991/AnsiballZ_systemd.py'
Feb 24 15:28:59 compute-0 sudo[98633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:00 compute-0 python3.9[98636]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:29:00 compute-0 systemd[1]: Reloading.
Feb 24 15:29:00 compute-0 systemd-rc-local-generator[98664]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:00 compute-0 systemd-sysv-generator[98671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:00 compute-0 systemd[1]: Starting ovn_controller container...
Feb 24 15:29:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 24 15:29:00 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd33ef4b744aeade6460890abf8fa754992253e57e1f004130a58c80a5d8ac4c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 24 15:29:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.
Feb 24 15:29:00 compute-0 podman[98685]: 2026-02-24 15:29:00.622904924 +0000 UTC m=+0.140003366 container init 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:29:00 compute-0 ovn_controller[98701]: + sudo -E kolla_set_configs
Feb 24 15:29:00 compute-0 podman[98685]: 2026-02-24 15:29:00.649998808 +0000 UTC m=+0.167097240 container start 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 15:29:00 compute-0 edpm-start-podman-container[98685]: ovn_controller
Feb 24 15:29:00 compute-0 systemd[1]: Created slice User Slice of UID 0.
Feb 24 15:29:00 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 24 15:29:00 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 24 15:29:00 compute-0 edpm-start-podman-container[98684]: Creating additional drop-in dependency for "ovn_controller" (6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52)
Feb 24 15:29:00 compute-0 systemd[1]: Starting User Manager for UID 0...
Feb 24 15:29:00 compute-0 systemd[1]: Reloading.
Feb 24 15:29:00 compute-0 systemd[98741]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Feb 24 15:29:00 compute-0 podman[98707]: 2026-02-24 15:29:00.745874747 +0000 UTC m=+0.085497309 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 24 15:29:00 compute-0 systemd[98741]: Queued start job for default target Main User Target.
Feb 24 15:29:00 compute-0 systemd[98741]: Created slice User Application Slice.
Feb 24 15:29:00 compute-0 systemd[98741]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 24 15:29:00 compute-0 systemd[98741]: Started Daily Cleanup of User's Temporary Directories.
Feb 24 15:29:00 compute-0 systemd[98741]: Reached target Paths.
Feb 24 15:29:00 compute-0 systemd[98741]: Reached target Timers.
Feb 24 15:29:00 compute-0 systemd[98741]: Starting D-Bus User Message Bus Socket...
Feb 24 15:29:00 compute-0 systemd[98741]: Starting Create User's Volatile Files and Directories...
Feb 24 15:29:00 compute-0 systemd[98741]: Finished Create User's Volatile Files and Directories.
Feb 24 15:29:00 compute-0 systemd-sysv-generator[98787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:00 compute-0 systemd-rc-local-generator[98781]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:00 compute-0 systemd[98741]: Listening on D-Bus User Message Bus Socket.
Feb 24 15:29:00 compute-0 systemd[98741]: Reached target Sockets.
Feb 24 15:29:00 compute-0 systemd[98741]: Reached target Basic System.
Feb 24 15:29:00 compute-0 systemd[98741]: Reached target Main User Target.
Feb 24 15:29:00 compute-0 systemd[98741]: Startup finished in 97ms.
Feb 24 15:29:00 compute-0 systemd[1]: Started User Manager for UID 0.
Feb 24 15:29:00 compute-0 systemd[1]: Started ovn_controller container.
Feb 24 15:29:00 compute-0 systemd[1]: 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52-76b5f1ce8e67ef49.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:29:00 compute-0 systemd[1]: 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52-76b5f1ce8e67ef49.service: Failed with result 'exit-code'.
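NOTE: the failing unit 6f41e0eba6c4...-76b5f1ce8e67ef49.service is the transient service behind the "/usr/bin/podman healthcheck run 6f41e0..." started a moment earlier; its status=1/FAILURE matches the health_status=starting, health_failing_streak=1 record at 15:29:00, i.e. the first probe ran before ovn-controller had finished connecting. The same probe can be repeated by hand once the container is up:

    # run the configured healthcheck once and report its exit status
    podman healthcheck run ovn_controller; echo "healthcheck exit: $?"
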
Feb 24 15:29:00 compute-0 systemd[1]: Started Session c1 of User root.
Feb 24 15:29:00 compute-0 sudo[98633]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:01 compute-0 ovn_controller[98701]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:29:01 compute-0 ovn_controller[98701]: INFO:__main__:Validating config file
Feb 24 15:29:01 compute-0 ovn_controller[98701]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:29:01 compute-0 ovn_controller[98701]: INFO:__main__:Writing out command to execute
Feb 24 15:29:01 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: ++ cat /run_command
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + ARGS=
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + sudo kolla_copy_cacerts
Feb 24 15:29:01 compute-0 systemd[1]: Started Session c2 of User root.
Feb 24 15:29:01 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + [[ ! -n '' ]]
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + . kolla_extend_start
Feb 24 15:29:01 compute-0 ovn_controller[98701]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + umask 0022
Feb 24 15:29:01 compute-0 ovn_controller[98701]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1501] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1515] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <warn>  [1771946941.1518] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1528] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1535] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1540] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 24 15:29:01 compute-0 kernel: br-int: entered promiscuous mode
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 24 15:29:01 compute-0 ovn_controller[98701]: 2026-02-24T15:29:01Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
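NOTE: the exec line at 15:29:01 shows ovn-controller started with -p (private key), -c (certificate), and -C (CA certificate), which is why the southbound connection above is ssl: rather than tcp:. The SB endpoint itself is not on the command line; ovn-controller reads it from the local Open_vSwitch table, so it can be checked with standard ovs-vsctl (assuming the conventional external_ids key):

    ovs-vsctl get open . external_ids:ovn-remote
    # expected to print something like "ssl:ovsdbserver-sb.openstack.svc:6642",
    # matching the reconnect messages above
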
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.1761] manager: (ovn-3ead5e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 24 15:29:01 compute-0 systemd-udevd[98844]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:29:01 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.2014] device (genev_sys_6081): carrier: link connected
Feb 24 15:29:01 compute-0 NetworkManager[56995]: <info>  [1771946941.2027] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Feb 24 15:29:01 compute-0 systemd-udevd[98862]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:29:01 compute-0 python3.9[98970]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:29:02 compute-0 sshd-session[98439]: Received disconnect from 120.48.56.86 port 36200:11:  [preauth]
Feb 24 15:29:02 compute-0 sshd-session[98439]: Disconnected from authenticating user root 120.48.56.86 port 36200 [preauth]
Feb 24 15:29:02 compute-0 sudo[99120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwjjkiwfpivnjrxzftfgdevnsqteapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946942.2686503-620-210515519871774/AnsiballZ_stat.py'
Feb 24 15:29:02 compute-0 sudo[99120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:02 compute-0 python3.9[99123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:02 compute-0 sudo[99120]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:03 compute-0 sudo[99244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oegjdllgymriqijytpewujbqtlkfhatu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946942.2686503-620-210515519871774/AnsiballZ_copy.py'
Feb 24 15:29:03 compute-0 sudo[99244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:03 compute-0 python3.9[99247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946942.2686503-620-210515519871774/.source.yaml _original_basename=.s2m_2rs1 follow=False checksum=c79082f8697f5aacb8fc3dcaa282e98a05edfe8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:03 compute-0 sudo[99244]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:03 compute-0 sudo[99397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnnvbhvjaixvzcvekorottuhpayyplur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946943.4491477-635-230054208698094/AnsiballZ_command.py'
Feb 24 15:29:03 compute-0 sudo[99397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:03 compute-0 python3.9[99400]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:29:03 compute-0 ovs-vsctl[99401]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 24 15:29:03 compute-0 sudo[99397]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:04 compute-0 sudo[99551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehbeqanaowgbzbeabezokzkmkbtlqfgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946944.0925517-643-240876681156820/AnsiballZ_command.py'
Feb 24 15:29:04 compute-0 sudo[99551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:04 compute-0 python3.9[99554]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:29:04 compute-0 ovs-vsctl[99556]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 24 15:29:04 compute-0 sudo[99551]: pam_unix(sudo:session): session closed for user root
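NOTE: the db_ctl_base ERR at 15:29:04 is the expected outcome of running `ovs-vsctl get ... external_ids:ovn-cms-options` when the key has never been set; the shell pipeline exits non-zero but the playbook evidently tolerates it, and the `remove` that follows completes without error even though the key is absent. The error message can be avoided entirely with --if-exists, which makes get print nothing instead of failing:

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
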
Feb 24 15:29:05 compute-0 sudo[99707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoootqfxnudilxskjonvldbmfijqmehm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946945.0084693-657-61392221732551/AnsiballZ_command.py'
Feb 24 15:29:05 compute-0 sudo[99707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:05 compute-0 python3.9[99710]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:29:05 compute-0 ovs-vsctl[99711]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 24 15:29:05 compute-0 sudo[99707]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:05 compute-0 sshd-session[88123]: Connection closed by 192.168.122.30 port 32804
Feb 24 15:29:05 compute-0 sshd-session[88120]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:29:05 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Feb 24 15:29:05 compute-0 systemd[1]: session-19.scope: Consumed 42.111s CPU time.
Feb 24 15:29:05 compute-0 systemd-logind[813]: Session 19 logged out. Waiting for processes to exit.
Feb 24 15:29:05 compute-0 systemd-logind[813]: Removed session 19.
Feb 24 15:29:11 compute-0 systemd[1]: Stopping User Manager for UID 0...
Feb 24 15:29:11 compute-0 systemd[98741]: Activating special unit Exit the Session...
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped target Main User Target.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped target Basic System.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped target Paths.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped target Sockets.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped target Timers.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 24 15:29:11 compute-0 systemd[98741]: Closed D-Bus User Message Bus Socket.
Feb 24 15:29:11 compute-0 systemd[98741]: Stopped Create User's Volatile Files and Directories.
Feb 24 15:29:11 compute-0 systemd[98741]: Removed slice User Application Slice.
Feb 24 15:29:11 compute-0 systemd[98741]: Reached target Shutdown.
Feb 24 15:29:11 compute-0 systemd[98741]: Finished Exit the Session.
Feb 24 15:29:11 compute-0 systemd[98741]: Reached target Exit the Session.
Feb 24 15:29:11 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Feb 24 15:29:11 compute-0 systemd[1]: Stopped User Manager for UID 0.
Feb 24 15:29:11 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 24 15:29:11 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 24 15:29:11 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 24 15:29:11 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 24 15:29:11 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Feb 24 15:29:11 compute-0 sshd-session[99737]: Accepted publickey for zuul from 192.168.122.30 port 44952 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:29:11 compute-0 systemd-logind[813]: New session 21 of user zuul.
Feb 24 15:29:11 compute-0 systemd[1]: Started Session 21 of User zuul.
Feb 24 15:29:11 compute-0 sshd-session[99737]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:29:12 compute-0 python3.9[99891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:29:13 compute-0 sudo[100045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlkppazgvgmskvasxpimhgerkavxkcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946952.9568863-29-139758208512705/AnsiballZ_file.py'
Feb 24 15:29:13 compute-0 sudo[100045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:13 compute-0 python3.9[100048]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:13 compute-0 sudo[100045]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:13 compute-0 sudo[100198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpbnespnyxiozsgtixbasmchnahwujnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946953.732145-29-225851476784942/AnsiballZ_file.py'
Feb 24 15:29:13 compute-0 sudo[100198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:14 compute-0 python3.9[100201]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:14 compute-0 sudo[100198]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:14 compute-0 sudo[100351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgqfxzhctbvnuggfevnkthcnuqksowfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946954.3346128-29-263127728743/AnsiballZ_file.py'
Feb 24 15:29:14 compute-0 sudo[100351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:14 compute-0 python3.9[100354]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:14 compute-0 sudo[100351]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:15 compute-0 sudo[100504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwelfjcgrgaifqbogblumlechlnuqxul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946954.874356-29-238186384125658/AnsiballZ_file.py'
Feb 24 15:29:15 compute-0 sudo[100504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:15 compute-0 python3.9[100507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:15 compute-0 sudo[100504]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:15 compute-0 sudo[100657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moiiiwwkgtnxgvifemgqbyxejtbetggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946955.4745755-29-275257666055978/AnsiballZ_file.py'
Feb 24 15:29:15 compute-0 sudo[100657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:15 compute-0 python3.9[100660]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:15 compute-0 sudo[100657]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:16 compute-0 python3.9[100810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:29:17 compute-0 sudo[100960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxnnjccqnqwcjourkisnhrqhmywctwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946956.838702-73-52364851661831/AnsiballZ_seboolean.py'
Feb 24 15:29:17 compute-0 sudo[100960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:17 compute-0 python3.9[100963]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 24 15:29:18 compute-0 sudo[100960]: pam_unix(sudo:session): session closed for user root
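The ansible.posix.seboolean task above (name=virt_sandbox_use_netlink, persistent=True, state=True) is equivalent to flipping the boolean with setsebool; -P persists the change across reboots:

    # Allow containerized sandboxes to use netlink sockets
    setsebool -P virt_sandbox_use_netlink on
    # Confirm the new value
    getsebool virt_sandbox_use_netlink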
Feb 24 15:29:18 compute-0 python3.9[101113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:19 compute-0 python3.9[101234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946958.1683848-81-157369200446787/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:20 compute-0 python3.9[101385]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:20 compute-0 python3.9[101506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946959.5422645-96-208481321162550/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:21 compute-0 sudo[101656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlittokwlsgsjilgunsrncvjmykagfja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946960.7969482-113-182056888311737/AnsiballZ_setup.py'
Feb 24 15:29:21 compute-0 sudo[101656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:21 compute-0 python3.9[101659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:29:21 compute-0 sudo[101656]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:22 compute-0 sudo[101741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znoobnflotatlomkchtudhjooydpcufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946960.7969482-113-182056888311737/AnsiballZ_dnf.py'
Feb 24 15:29:22 compute-0 sudo[101741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:22 compute-0 python3.9[101744]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:29:23 compute-0 sudo[101741]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:24 compute-0 sudo[101895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gikijjtvlksqqipcikyohedogzwxirpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946963.7734418-125-223667359609629/AnsiballZ_systemd.py'
Feb 24 15:29:24 compute-0 sudo[101895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:24 compute-0 python3.9[101898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:29:24 compute-0 sudo[101895]: pam_unix(sudo:session): session closed for user root
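The two privileged tasks above install the openvswitch package and then enable and start its unit, which both the ovn_controller and ovn_metadata_agent containers declare in depends_on. The shell equivalent of the logged dnf and systemd module calls:

    dnf -y install openvswitch
    systemctl enable --now openvswitch.service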
Feb 24 15:29:25 compute-0 python3.9[102051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:25 compute-0 python3.9[102172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946964.9305277-133-138790347223101/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:26 compute-0 python3.9[102322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:27 compute-0 python3.9[102443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946966.0448349-133-275267233973458/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:28 compute-0 python3.9[102593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:28 compute-0 python3.9[102714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946967.8084688-177-241252731265484/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:29 compute-0 python3.9[102864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:29 compute-0 python3.9[102985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946968.8678427-177-65353330562476/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
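Each configuration file above is deployed with the same two-step pattern: ansible.legacy.stat takes a sha1 of the destination, and ansible.legacy.copy ships the rendered template only when the checksum differs. A sketch of that idempotent copy for one of the logged destinations (the rendered source path here is hypothetical):

    src=/tmp/rendered-01-rootwrap.conf    # hypothetical rendered template
    dst=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf
    # Copy only when the content actually changed, as the stat/copy pair does
    if [ ! -e "$dst" ] || [ "$(sha1sum < "$src")" != "$(sha1sum < "$dst")" ]; then
        install -m 0644 "$src" "$dst"
        chcon -t container_file_t "$dst"
    fi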
Feb 24 15:29:30 compute-0 python3.9[103135]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:29:30 compute-0 sudo[103287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njromucymjnlpeyqniaglxuknpcmruvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946970.620878-215-246710239838216/AnsiballZ_file.py'
Feb 24 15:29:30 compute-0 sudo[103287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:31 compute-0 python3.9[103290]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:31 compute-0 sudo[103287]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:31 compute-0 ovn_controller[98701]: 2026-02-24T15:29:31Z|00025|memory|INFO|17280 kB peak resident set size after 30.0 seconds
Feb 24 15:29:31 compute-0 ovn_controller[98701]: 2026-02-24T15:29:31Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 24 15:29:31 compute-0 podman[103291]: 2026-02-24 15:29:31.112020345 +0000 UTC m=+0.072102758 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:29:31 compute-0 sudo[103466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xefebxcqnjsyxgppjblojyiykpoyfyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946971.1965332-223-20273429212781/AnsiballZ_stat.py'
Feb 24 15:29:31 compute-0 sudo[103466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:31 compute-0 python3.9[103469]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:31 compute-0 sudo[103466]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:31 compute-0 sudo[103545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algrwrtgqbjfidxceezdxyscspnoyoxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946971.1965332-223-20273429212781/AnsiballZ_file.py'
Feb 24 15:29:31 compute-0 sudo[103545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:32 compute-0 python3.9[103548]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:32 compute-0 sudo[103545]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:32 compute-0 sudo[103698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgnezydwzphiztoifpgdvpicqeqyzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946972.1960545-223-239184100624448/AnsiballZ_stat.py'
Feb 24 15:29:32 compute-0 sudo[103698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:32 compute-0 python3.9[103701]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:32 compute-0 sudo[103698]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:32 compute-0 sudo[103777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzufkbnszhhjdigmljibvmzncpvheehc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946972.1960545-223-239184100624448/AnsiballZ_file.py'
Feb 24 15:29:32 compute-0 sudo[103777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:33 compute-0 python3.9[103780]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:33 compute-0 sudo[103777]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:33 compute-0 sudo[103930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobxthxnfqowvxeuummzilslaizcprbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946973.3053148-246-258189262206065/AnsiballZ_file.py'
Feb 24 15:29:33 compute-0 sudo[103930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:33 compute-0 python3.9[103933]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:33 compute-0 sudo[103930]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:34 compute-0 sudo[104083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovufzrbownjtaannysqvwyewxmtwuuzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946973.95672-254-254668100903939/AnsiballZ_stat.py'
Feb 24 15:29:34 compute-0 sudo[104083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:34 compute-0 python3.9[104086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:34 compute-0 sudo[104083]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:34 compute-0 sudo[104162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pckpxllrysyxwqenkmphzduydplyfykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946973.95672-254-254668100903939/AnsiballZ_file.py'
Feb 24 15:29:34 compute-0 sudo[104162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:34 compute-0 python3.9[104165]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:34 compute-0 sudo[104162]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:35 compute-0 sudo[104315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gocsxhqrceqniifhmgeoqfelanxszsay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946975.0133865-266-25432991242267/AnsiballZ_stat.py'
Feb 24 15:29:35 compute-0 sudo[104315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:35 compute-0 python3.9[104318]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:35 compute-0 sudo[104315]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:35 compute-0 sudo[104394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyuhaxxangvijwmsbkmsftgbqufvkqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946975.0133865-266-25432991242267/AnsiballZ_file.py'
Feb 24 15:29:35 compute-0 sudo[104394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:35 compute-0 python3.9[104397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:35 compute-0 sudo[104394]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:36 compute-0 sudo[104547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvsyiujiwwnyfyavczutwxnxracsonow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946976.1432705-278-62089957183984/AnsiballZ_systemd.py'
Feb 24 15:29:36 compute-0 sudo[104547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:36 compute-0 python3.9[104550]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:29:36 compute-0 systemd[1]: Reloading.
Feb 24 15:29:36 compute-0 systemd-sysv-generator[104579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:36 compute-0 systemd-rc-local-generator[104576]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:37 compute-0 sudo[104547]: pam_unix(sudo:session): session closed for user root
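The sequence above stages /etc/systemd/system/edpm-container-shutdown.service, drops a preset under /etc/systemd/system-preset, and then enables and starts the unit with daemon_reload=True. The preset file's contents are not captured in this log; in the standard preset format it would plausibly read:

    # Assumed preset body; only the path and mode 0644 appear in the log
    cat > /etc/systemd/system-preset/91-edpm-container-shutdown.preset <<'EOF'
    enable edpm-container-shutdown.service
    EOF
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service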
Feb 24 15:29:37 compute-0 sudo[104744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmjndwtpmdbyrgxjbyopahuiwdlujku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946977.2049577-286-59712743841164/AnsiballZ_stat.py'
Feb 24 15:29:37 compute-0 sudo[104744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:37 compute-0 python3.9[104747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:37 compute-0 sudo[104744]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:37 compute-0 sudo[104823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcpmqmewnpvjtwkjklqvknlxegfrjavr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946977.2049577-286-59712743841164/AnsiballZ_file.py'
Feb 24 15:29:37 compute-0 sudo[104823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:38 compute-0 python3.9[104826]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:38 compute-0 sudo[104823]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:38 compute-0 sudo[104976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypgafojcvsoatzursgrwylxrnhwlpwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946978.390587-298-258653924585583/AnsiballZ_stat.py'
Feb 24 15:29:38 compute-0 sudo[104976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:38 compute-0 python3.9[104979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:38 compute-0 sudo[104976]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:39 compute-0 sudo[105055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddboiewahvhjsdcrqgpeqxksknxxjtvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946978.390587-298-258653924585583/AnsiballZ_file.py'
Feb 24 15:29:39 compute-0 sudo[105055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:39 compute-0 python3.9[105058]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:39 compute-0 sudo[105055]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:39 compute-0 sudo[105208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pahthuncjclloklzrjwayimlougjtfpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946979.4270072-310-87373532462880/AnsiballZ_systemd.py'
Feb 24 15:29:39 compute-0 sudo[105208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:39 compute-0 python3.9[105211]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:29:40 compute-0 systemd[1]: Reloading.
Feb 24 15:29:40 compute-0 systemd-rc-local-generator[105241]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:40 compute-0 systemd-sysv-generator[105245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:40 compute-0 systemd[1]: Starting Create netns directory...
Feb 24 15:29:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 24 15:29:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 24 15:29:40 compute-0 systemd[1]: Finished Create netns directory.
Feb 24 15:29:40 compute-0 sudo[105208]: pam_unix(sudo:session): session closed for user root
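netns-placeholder behaves as a oneshot: systemd reports it starting ("Create netns directory"), a transient run-netns-placeholder.mount deactivating, and the service finishing immediately. That is consistent with a unit that creates a throwaway namespace so /run/netns exists as a shared mount for containers. A hypothetical reconstruction (only the unit name and description come from the log; the command is illustrative):

    cat > /etc/systemd/system/netns-placeholder.service <<'EOF'
    [Unit]
    Description=Create netns directory
    [Service]
    Type=oneshot
    # Illustrative: ip netns add creates /run/netns and mounts the namespace
    ExecStart=/usr/sbin/ip netns add placeholder
    [Install]
    WantedBy=multi-user.target
    EOF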
Feb 24 15:29:40 compute-0 sudo[105408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlvzhcxevmkomsazkdtphqbviffsumlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946980.5290651-320-270505373746403/AnsiballZ_file.py'
Feb 24 15:29:40 compute-0 sudo[105408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:41 compute-0 python3.9[105411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:41 compute-0 sudo[105408]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:41 compute-0 sudo[105561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xydcrhebyuefpqctitnnkgskqolrpkfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946981.1980782-328-122614673517461/AnsiballZ_stat.py'
Feb 24 15:29:41 compute-0 sudo[105561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:41 compute-0 python3.9[105564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:41 compute-0 sudo[105561]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:41 compute-0 sudo[105685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogwimrppxnqnydyuyesvyxscdckyfaof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946981.1980782-328-122614673517461/AnsiballZ_copy.py'
Feb 24 15:29:41 compute-0 sudo[105685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:42 compute-0 python3.9[105688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771946981.1980782-328-122614673517461/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:42 compute-0 sudo[105685]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:42 compute-0 sudo[105838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cejyyvpcxnyjrbvofjeeomuebnocnylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946982.5059114-345-267844122086074/AnsiballZ_file.py'
Feb 24 15:29:42 compute-0 sudo[105838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:42 compute-0 python3.9[105841]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:42 compute-0 sudo[105838]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:43 compute-0 sudo[105991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkxljtrfhddilybqgpjacolyhjatepeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946983.0718253-353-224620630550885/AnsiballZ_file.py'
Feb 24 15:29:43 compute-0 sudo[105991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:43 compute-0 python3.9[105994]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:29:43 compute-0 sudo[105991]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:43 compute-0 sudo[106144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogshknpwftjjgegktsqbjnjhqipkuvcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946983.6081505-361-277029291563343/AnsiballZ_stat.py'
Feb 24 15:29:43 compute-0 sudo[106144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:44 compute-0 python3.9[106147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:44 compute-0 sudo[106144]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:44 compute-0 sudo[106268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpfjqrqsxoyzlcspsbqkjfcdilxfpal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946983.6081505-361-277029291563343/AnsiballZ_copy.py'
Feb 24 15:29:44 compute-0 sudo[106268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:44 compute-0 python3.9[106271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946983.6081505-361-277029291563343/.source.json _original_basename=.wij072wu follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:44 compute-0 sudo[106268]: pam_unix(sudo:session): session closed for user root
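ovn_metadata_agent.json is the kolla config descriptor that kolla_set_configs later reads inside the container as /var/lib/kolla/config_files/config.json. Its exact contents are masked in the log (content=NOT_LOGGING_PARAMETER), but the copy and permission steps it drives, visible in the INFO:__main__ lines at container start below, suggest a structure along these lines:

    # Plausible descriptor inferred from the kolla_set_configs output below;
    # the real file's contents (and the neutron owner/perm) are assumptions.
    cat > /var/lib/kolla/config_files/ovn_metadata_agent.json <<'EOF'
    {
      "command": "neutron-ovn-metadata-agent",
      "config_files": [
        {"source": "/etc/neutron.conf.d/01-rootwrap.conf",
         "dest": "/etc/neutron/rootwrap.conf",
         "owner": "neutron", "perm": "0600"}
      ],
      "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": true}
      ]
    }
    EOF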
Feb 24 15:29:45 compute-0 python3.9[106421]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:46 compute-0 sudo[106842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvycnejzwokzloafzpanduzuawhbmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946986.5227432-401-99075849929774/AnsiballZ_container_config_data.py'
Feb 24 15:29:46 compute-0 sudo[106842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:47 compute-0 python3.9[106845]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 24 15:29:47 compute-0 sudo[106842]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:47 compute-0 sudo[106995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkgmgpdxmpvocmuzcvzcyhxijulklfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946987.4724932-412-67822560936893/AnsiballZ_container_config_hash.py'
Feb 24 15:29:47 compute-0 sudo[106995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:48 compute-0 python3.9[106998]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:29:48 compute-0 sudo[106995]: pam_unix(sudo:session): session closed for user root
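container_config_hash digests the config volumes under /var/lib/openstack; the result surfaces as the EDPM_CONFIG_HASH environment value in the container's config_data, so any configuration change yields a new hash and forces a container restart. The idea, sketched in shell (the module's real implementation differs):

    # Illustrative only: derive one stable hash over a config volume
    find /var/lib/openstack/neutron-ovn-metadata-agent -type f -print0 \
        | sort -z | xargs -0 sha256sum | sha256sum | cut -d' ' -f1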
Feb 24 15:29:48 compute-0 sudo[107148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnjurwhydxoxtxfbdcmgrepjmfvkjda ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771946988.3612242-422-255233240995069/AnsiballZ_edpm_container_manage.py'
Feb 24 15:29:48 compute-0 sudo[107148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:49 compute-0 python3[107151]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:29:49 compute-0 podman[107188]: 2026-02-24 15:29:49.252325145 +0000 UTC m=+0.094979346 container create e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 24 15:29:49 compute-0 podman[107188]: 2026-02-24 15:29:49.17794328 +0000 UTC m=+0.020597491 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 15:29:49 compute-0 python3[107151]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
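The PODMAN-CONTAINER-DEBUG line above is the literal podman create call that edpm_container_manage assembled from config_data: each volumes entry becomes a --volume flag, net/pid/user map to --network/--pid/--user, and the healthcheck test becomes --healthcheck-command. The created container can then be checked with ordinary podman commands:

    # Confirm the container exists and query its state
    podman inspect ovn_metadata_agent --format '{{.State.Status}}'
    # Run the configured healthcheck (/openstack/healthcheck) on demand
    podman healthcheck run ovn_metadata_agent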
Feb 24 15:29:49 compute-0 sudo[107148]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:49 compute-0 sudo[107375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhloglxqxqwxrnvclstjstqfigdcriyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946989.5668762-430-43893486931756/AnsiballZ_stat.py'
Feb 24 15:29:49 compute-0 sudo[107375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:49 compute-0 python3.9[107378]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:29:50 compute-0 sudo[107375]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:50 compute-0 sudo[107530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzecvjhykgoyxdebnltasdjnohayxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946990.2336924-439-101923768043092/AnsiballZ_file.py'
Feb 24 15:29:50 compute-0 sudo[107530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:50 compute-0 python3.9[107533]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:50 compute-0 sudo[107530]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:50 compute-0 sudo[107607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kogksndhkcsijqnczrptpjaejuzcvvks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946990.2336924-439-101923768043092/AnsiballZ_stat.py'
Feb 24 15:29:50 compute-0 sudo[107607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:51 compute-0 python3.9[107610]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:29:51 compute-0 sudo[107607]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:51 compute-0 sudo[107759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhkcjwgtplkpgsfnpggcujqixeeflbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946991.1279404-439-132954464729906/AnsiballZ_copy.py'
Feb 24 15:29:51 compute-0 sudo[107759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:51 compute-0 python3.9[107762]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771946991.1279404-439-132954464729906/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:51 compute-0 sudo[107759]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:51 compute-0 sudo[107836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbbsqahlzxlhattqihpvzcjqggnznzsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946991.1279404-439-132954464729906/AnsiballZ_systemd.py'
Feb 24 15:29:51 compute-0 sudo[107836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:52 compute-0 python3.9[107839]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:29:52 compute-0 systemd[1]: Reloading.
Feb 24 15:29:52 compute-0 systemd-rc-local-generator[107868]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:52 compute-0 systemd-sysv-generator[107871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:52 compute-0 sudo[107836]: pam_unix(sudo:session): session closed for user root
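The copy task above installed /etc/systemd/system/edpm_ovn_metadata_agent.service (content=NOT_LOGGING_PARAMETER, so the unit body is not in the log), and the daemon_reload picked it up. Judging from the edpm-start-podman-container messages logged when the unit starts below, it plausibly wraps that helper script; a hypothetical sketch:

    # Hypothetical unit body; ExecStart is inferred from the
    # edpm-start-podman-container messages, not from the shipped file.
    cat > /etc/systemd/system/edpm_ovn_metadata_agent.service <<'EOF'
    [Unit]
    Description=ovn_metadata_agent container
    After=openvswitch.service
    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_metadata_agent
    [Install]
    WantedBy=multi-user.target
    EOF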
Feb 24 15:29:52 compute-0 sudo[107955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eozzkztoqfvgjumfctxwbsgbafbsihge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946991.1279404-439-132954464729906/AnsiballZ_systemd.py'
Feb 24 15:29:52 compute-0 sudo[107955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:53 compute-0 python3.9[107958]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:29:53 compute-0 systemd[1]: Reloading.
Feb 24 15:29:53 compute-0 systemd-sysv-generator[107986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:53 compute-0 systemd-rc-local-generator[107983]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:53 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Feb 24 15:29:53 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280d0a2d68f56d4d591467e5b400aa0378109c50d20720efa1556bebd8fc5e3f/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 24 15:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280d0a2d68f56d4d591467e5b400aa0378109c50d20720efa1556bebd8fc5e3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 15:29:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.
Feb 24 15:29:53 compute-0 podman[108006]: 2026-02-24 15:29:53.461596023 +0000 UTC m=+0.176345202 container init e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + sudo -E kolla_set_configs
Feb 24 15:29:53 compute-0 podman[108006]: 2026-02-24 15:29:53.500645369 +0000 UTC m=+0.215394498 container start e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Validating config file
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Copying service configuration files
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Writing out command to execute
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 24 15:29:53 compute-0 edpm-start-podman-container[108006]: ovn_metadata_agent
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: ++ cat /run_command
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + CMD=neutron-ovn-metadata-agent
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + ARGS=
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + sudo kolla_copy_cacerts
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + [[ ! -n '' ]]
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + . kolla_extend_start
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: Running command: 'neutron-ovn-metadata-agent'
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + umask 0022
Feb 24 15:29:53 compute-0 ovn_metadata_agent[108021]: + exec neutron-ovn-metadata-agent
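The "+" lines above are the container entrypoint running under set -x: kolla_set_configs copies the mounted configuration into place, /run_command supplies the service command, and the shell finally execs it so the agent replaces the entrypoint process. Condensed into a commented sketch that mirrors the trace:

    sudo -E kolla_set_configs     # copy /etc/neutron.conf.d/* per config.json
    CMD=$(cat /run_command)       # -> neutron-ovn-metadata-agent
    ARGS=
    sudo kolla_copy_cacerts       # install the mounted CA bundle
    . kolla_extend_start          # image-specific start hooks
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}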
Feb 24 15:29:53 compute-0 podman[108028]: 2026-02-24 15:29:53.575767683 +0000 UTC m=+0.068312190 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 24 15:29:53 compute-0 edpm-start-podman-container[108005]: Creating additional drop-in dependency for "ovn_metadata_agent" (e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9)
Feb 24 15:29:53 compute-0 systemd[1]: Reloading.
Feb 24 15:29:53 compute-0 systemd-rc-local-generator[108093]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:29:53 compute-0 systemd-sysv-generator[108097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:29:53 compute-0 systemd[1]: Started ovn_metadata_agent container.
Feb 24 15:29:53 compute-0 sudo[107955]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:54 compute-0 python3.9[108266]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:29:55 compute-0 sudo[108416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbybhbusimhhhzednbmusmgyqcbqraph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946995.038843-484-42050403666570/AnsiballZ_stat.py'
Feb 24 15:29:55 compute-0 sudo[108416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.360 108026 INFO neutron.common.config [-] Logging enabled!
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.360 108026 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.361 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
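The banner above opens oslo.config's log_opt_values() dump: with debug = True the agent logs where configuration was gathered from (command line, config_file, config_dir), then every registered option, DEFAULT-section options first (the cfg.py:2602 lines below) and grouped options after (the cfg.py:2609 lines). A self-contained sketch of the same mechanism, using two of the options from the dump below and a temporary file standing in for /etc/neutron/neutron.conf:

    import logging
    import tempfile

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.IntOpt('agent_down_time', default=75),
        cfg.StrOpt('base_mac', default='fa:16:3e:00:00:00'),
    ])

    # Stand-in for /etc/neutron/neutron.conf; the agent additionally scans
    # the /etc/neutron.conf.d directory listed under config_dir above.
    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[DEFAULT]\nagent_down_time = 75\n')

    conf(args=[], default_config_files=[f.name])
    conf.log_opt_values(LOG, logging.DEBUG)  # emits the same banner and dump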
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.362 108026 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.363 108026 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.364 108026 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.368 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
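The logging_default_format_string and log_date_format values above are what render every agent line in this journal: timestamp with milliseconds, PID, level, logger name, "[-]" (empty request context), then the message. A stdlib-only sketch of that format; the oslo-specific %(instance)s field is dropped because plain logging.Formatter does not supply it:

    import logging

    fmt = ('%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s '
           '[-] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(fmt, datefmt='%Y-%m-%d %H:%M:%S'))

    log = logging.getLogger('neutron.agent.ovn.metadata_agent')
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)
    # Prints e.g. "2026-02-24 15:29:55.360 108026 DEBUG
    # neutron.agent.ovn.metadata_agent [-] Logging enabled!"
    log.debug('Logging enabled!')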
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.369 108026 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.370 108026 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.371 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
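Taken together, the metadata_proxy_* and nova_metadata_* options above describe the data path: per-network haproxy wrappers forward instance requests to the agent's Unix socket at /var/lib/neutron/metadata_proxy, and the agent relays them to nova_metadata_host (nova-metadata-internal.openstack.svc) on port 8775 over HTTPS, adding an HMAC-SHA256 signature derived from metadata_proxy_shared_secret (masked as **** above). A sketch of that signature scheme, with a placeholder secret and a hypothetical instance UUID:

    import hashlib
    import hmac

    shared_secret = b'****'  # placeholder; the real value is masked in the dump
    instance_id = '3f4e8a2c-0000-4000-8000-000000000000'  # hypothetical UUID

    sig = hmac.new(shared_secret, instance_id.encode(),
                   hashlib.sha256).hexdigest()
    headers = {
        'X-Instance-ID': instance_id,
        'X-Instance-ID-Signature': sig,  # nova-metadata recomputes and compares
    }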
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.372 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.373 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.374 108026 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.375 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.375 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.375 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.375 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.376 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.377 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.378 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.379 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.380 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.381 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
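The privsep*.capabilities lists above are numeric Linux capabilities (values as in linux/capability.h): the general privsep context retains [21, 12, 1, 2, 19], i.e. CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE, while the task-specific contexts keep narrower subsets. A small decode of the values logged above:

    # Capability numbers as defined in linux/capability.h.
    CAP_NAMES = {
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        12: 'CAP_NET_ADMIN',
        19: 'CAP_SYS_PTRACE',
        21: 'CAP_SYS_ADMIN',
    }

    contexts = {  # values copied from the dump above
        'privsep': [21, 12, 1, 2, 19],
        'privsep_dhcp_release': [21, 12],
        'privsep_ovs_vsctl': [21, 12],
        'privsep_namespace': [21],
        'privsep_conntrack': [12],
        'privsep_link': [12, 21],
    }

    for name, caps in contexts.items():
        print(name, '->', [CAP_NAMES[c] for c in caps])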
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.382 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.383 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.384 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.385 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.386 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.387 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.388 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.389 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.389 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.389 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.389 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.389 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.390 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.391 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.392 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.393 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.394 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.395 108026 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.405 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.406 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.406 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.406 108026 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.406 108026 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.417 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ab329b13-e5ce-43e1-b513-c55bd650f251 (UUID: ab329b13-e5ce-43e1-b513-c55bd650f251) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.437 108026 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.437 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.437 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.437 108026 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.440 108026 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.445 108026 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.450 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ab329b13-e5ce-43e1-b513-c55bd650f251'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], external_ids={}, name=ab329b13-e5ce-43e1-b513-c55bd650f251, nb_cfg_timestamp=1771946949175, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.450 108026 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f4cbd0e73a0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.451 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.451 108026 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.451 108026 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.452 108026 INFO oslo_service.service [-] Starting 1 workers
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.455 108026 DEBUG oslo_service.service [-] Started child 108420 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.458 108026 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpdjrhdvhv/privsep.sock']
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.459 108420 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-245169'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.492 108420 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.493 108420 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.493 108420 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.500 108420 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.506 108420 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 24 15:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.513 108420 INFO eventlet.wsgi.server [-] (108420) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 24 15:29:55 compute-0 python3.9[108419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:29:55 compute-0 sudo[108416]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:55 compute-0 sudo[108547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixqzrioxjlvrmykyojteajfcvdigrhrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771946995.038843-484-42050403666570/AnsiballZ_copy.py'
Feb 24 15:29:55 compute-0 sudo[108547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:29:55 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.126 108026 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.128 108026 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdjrhdvhv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.989 108551 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.991 108551 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.995 108551 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:55.995 108551 INFO oslo.privsep.daemon [-] privsep daemon running as pid 108551
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.131 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[db7d8e7b-4723-4d85-a03b-09b8dbd4a052]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:29:56 compute-0 python3.9[108550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771946995.038843-484-42050403666570/.source.yaml _original_basename=.3w2hpc4b follow=False checksum=1ccbe235e101ba9186480dae9cacd9c82d783f22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:29:56 compute-0 sudo[108547]: pam_unix(sudo:session): session closed for user root
Feb 24 15:29:56 compute-0 sshd-session[99741]: Connection closed by 192.168.122.30 port 44952
Feb 24 15:29:56 compute-0 sshd-session[99737]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:29:56 compute-0 systemd-logind[813]: Session 21 logged out. Waiting for processes to exit.
Feb 24 15:29:56 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Feb 24 15:29:56 compute-0 systemd[1]: session-21.scope: Consumed 32.774s CPU time.
Feb 24 15:29:56 compute-0 systemd-logind[813]: Removed session 21.
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.613 108551 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.613 108551 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:29:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:56.613 108551 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:29:56 compute-0 sshd-session[108580]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.104 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c1a85c-3e8b-4384-8654-3fbe16362481]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.109 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, column=external_ids, values=({'neutron:ovn-metadata-id': '697298c4-4d41-573e-9ba3-9cf4b744d9d8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.120 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.128 108026 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.129 108026 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.129 108026 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.130 108026 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.130 108026 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.130 108026 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.131 108026 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.131 108026 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.131 108026 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.131 108026 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.131 108026 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.132 108026 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.132 108026 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.132 108026 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.132 108026 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.133 108026 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.133 108026 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.133 108026 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.133 108026 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.134 108026 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.134 108026 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.134 108026 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.134 108026 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.134 108026 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.135 108026 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.135 108026 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.135 108026 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.136 108026 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.136 108026 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.136 108026 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.136 108026 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.136 108026 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.137 108026 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.137 108026 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.137 108026 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.137 108026 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.138 108026 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.138 108026 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.138 108026 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.138 108026 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.139 108026 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.139 108026 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.139 108026 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.139 108026 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.140 108026 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.140 108026 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.140 108026 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.140 108026 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.140 108026 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.141 108026 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.141 108026 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.141 108026 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.141 108026 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.141 108026 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.142 108026 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.142 108026 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.142 108026 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.142 108026 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.142 108026 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.143 108026 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.143 108026 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.143 108026 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.143 108026 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.144 108026 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.144 108026 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.144 108026 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.144 108026 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.145 108026 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.145 108026 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.145 108026 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.145 108026 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.145 108026 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.146 108026 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.146 108026 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.146 108026 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.146 108026 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.146 108026 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.147 108026 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.148 108026 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.149 108026 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.150 108026 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.151 108026 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.152 108026 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.153 108026 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.154 108026 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.155 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.156 108026 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.157 108026 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.158 108026 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.159 108026 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.160 108026 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.161 108026 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.162 108026 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.163 108026 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.164 108026 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.165 108026 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.166 108026 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.167 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.168 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.169 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.170 108026 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.171 108026 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:29:57 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:29:57.171 108026 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:30:01 compute-0 sshd-session[108582]: Accepted publickey for zuul from 192.168.122.30 port 49436 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:30:01 compute-0 systemd-logind[813]: New session 22 of user zuul.
Feb 24 15:30:01 compute-0 systemd[1]: Started Session 22 of User zuul.
Feb 24 15:30:01 compute-0 sshd-session[108582]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:30:02 compute-0 podman[108584]: 2026-02-24 15:30:02.009245464 +0000 UTC m=+0.085801981 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 24 15:30:02 compute-0 python3.9[108759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:30:04 compute-0 sudo[108913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnbhdquhlmhjdcsaldtfhvnrttpfvqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947003.6115174-29-264398693050782/AnsiballZ_command.py'
Feb 24 15:30:04 compute-0 sudo[108913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:04 compute-0 python3.9[108916]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:04 compute-0 sudo[108913]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:05 compute-0 sudo[109079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqcvyecpmoejbotqlhkpditpmgqxoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947004.600374-40-221969826370033/AnsiballZ_systemd_service.py'
Feb 24 15:30:05 compute-0 sudo[109079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:05 compute-0 python3.9[109082]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:30:05 compute-0 systemd[1]: Reloading.
Feb 24 15:30:05 compute-0 systemd-rc-local-generator[109113]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:30:05 compute-0 systemd-sysv-generator[109121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:30:05 compute-0 sudo[109079]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:06 compute-0 python3.9[109274]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:30:06 compute-0 network[109291]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:30:06 compute-0 network[109292]: 'network-scripts' will be removed from distribution in near future.
Feb 24 15:30:06 compute-0 network[109293]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 24 15:30:08 compute-0 sudo[109553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zakzuvsesdlzguvulxkpaycnokmykcbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947008.4466743-59-24132975141987/AnsiballZ_systemd_service.py'
Feb 24 15:30:08 compute-0 sudo[109553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:08 compute-0 python3.9[109556]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:08 compute-0 sudo[109553]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:09 compute-0 sudo[109707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alyauuylaynbmgyabiqathfrxemuorqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947009.0818918-59-24975546015810/AnsiballZ_systemd_service.py'
Feb 24 15:30:09 compute-0 sudo[109707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:09 compute-0 python3.9[109710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:09 compute-0 sudo[109707]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:10 compute-0 sudo[109861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eabesrtakixdqxlmyulfgxhsayboecfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947009.795539-59-19447395888750/AnsiballZ_systemd_service.py'
Feb 24 15:30:10 compute-0 sudo[109861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:10 compute-0 python3.9[109864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:10 compute-0 sudo[109861]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:10 compute-0 sudo[110015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxqnueoqrprmptngvmzihzdewpovwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947010.4964368-59-249159714216046/AnsiballZ_systemd_service.py'
Feb 24 15:30:10 compute-0 sudo[110015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:11 compute-0 python3.9[110018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:11 compute-0 sudo[110015]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:11 compute-0 sudo[110169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjabnphbxmeiqnohimxikptfkvofttmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947011.182866-59-137166572905498/AnsiballZ_systemd_service.py'
Feb 24 15:30:11 compute-0 sudo[110169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:11 compute-0 python3.9[110172]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:11 compute-0 sudo[110169]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:12 compute-0 sudo[110323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopgsdhqngettdtlzxqsaztlvrsbkrif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947011.8458655-59-171495515540144/AnsiballZ_systemd_service.py'
Feb 24 15:30:12 compute-0 sudo[110323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:12 compute-0 python3.9[110326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:12 compute-0 sudo[110323]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:12 compute-0 sudo[110477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzguktfjcczegkodwnaqpkhwzfnvbygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947012.5684683-59-143363098067857/AnsiballZ_systemd_service.py'
Feb 24 15:30:12 compute-0 sudo[110477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:13 compute-0 python3.9[110480]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:30:13 compute-0 sudo[110477]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:13 compute-0 sudo[110631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvdewrfpqxsczeccfpxyjegiwyzuyahs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947013.4472218-111-43695101803839/AnsiballZ_file.py'
Feb 24 15:30:13 compute-0 sudo[110631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:14 compute-0 python3.9[110634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:14 compute-0 sudo[110631]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:14 compute-0 sudo[110784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjpfyxlctmortsceznsigfxqhujjjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947014.168158-111-143427502671578/AnsiballZ_file.py'
Feb 24 15:30:14 compute-0 sudo[110784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:14 compute-0 python3.9[110787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:14 compute-0 sudo[110784]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:14 compute-0 sudo[110937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfziaygcfodowesfaraoatvrwzkfkxds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947014.6919653-111-90131100139447/AnsiballZ_file.py'
Feb 24 15:30:14 compute-0 sudo[110937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:15 compute-0 python3.9[110940]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:15 compute-0 sudo[110937]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:15 compute-0 sudo[111090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukhlvyhaidnjjbmmlgujgkpgycjgjrpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947015.3060517-111-241552421671874/AnsiballZ_file.py'
Feb 24 15:30:15 compute-0 sudo[111090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:15 compute-0 python3.9[111093]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:15 compute-0 sudo[111090]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:16 compute-0 sudo[111243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rteqtqworkffbrwtqxlircskaugjplyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947015.8827767-111-270266065008028/AnsiballZ_file.py'
Feb 24 15:30:16 compute-0 sudo[111243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:16 compute-0 python3.9[111246]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:16 compute-0 sudo[111243]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:16 compute-0 sudo[111396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkbqpwvrsbmfrxoccxouwygeajkceiaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947016.4349508-111-202354738580880/AnsiballZ_file.py'
Feb 24 15:30:16 compute-0 sudo[111396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:16 compute-0 python3.9[111399]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:16 compute-0 sudo[111396]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:17 compute-0 sudo[111549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbncnbebjhbzaiiriqnmguzxkwqslcpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947017.0478835-111-22930283196678/AnsiballZ_file.py'
Feb 24 15:30:17 compute-0 sudo[111549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:17 compute-0 python3.9[111552]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:17 compute-0 sudo[111549]: pam_unix(sudo:session): session closed for user root
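Each ansible.builtin.file task above runs with state=absent against a unit file under /usr/lib/systemd/system, i.e. it simply unlinks the vendor copy. A sketch of the same cleanup, using the paths as logged:

  # remove the packaged TripleO nova-libvirt units (sketch)
  for u in libvirt.target virtlogd_wrapper.service virtnodedevd.service \
           virtproxyd.service virtqemud.service virtsecretd.service virtstoraged.service; do
    rm -f "/usr/lib/systemd/system/tripleo_nova_$u"
  done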
Feb 24 15:30:17 compute-0 sudo[111702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brbicazpcruetmlrgokgcwaswtrycxsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947017.6564305-161-250636797178270/AnsiballZ_file.py'
Feb 24 15:30:17 compute-0 sudo[111702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:18 compute-0 python3.9[111705]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:18 compute-0 sudo[111702]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:18 compute-0 sudo[111855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpheydcpxgmkieygotfgmwouznndrtuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947018.2453728-161-163595554684142/AnsiballZ_file.py'
Feb 24 15:30:18 compute-0 sudo[111855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:18 compute-0 python3.9[111858]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:18 compute-0 sudo[111855]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:19 compute-0 sudo[112008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayrggguuxdgcolabqzqmwoezfupckctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947018.7928278-161-244480807871223/AnsiballZ_file.py'
Feb 24 15:30:19 compute-0 sudo[112008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:19 compute-0 python3.9[112011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:19 compute-0 sudo[112008]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:19 compute-0 sudo[112161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reznqwivraktqazhgymxlyvsjfaagqnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947019.3541524-161-26824229334404/AnsiballZ_file.py'
Feb 24 15:30:19 compute-0 sudo[112161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:19 compute-0 python3.9[112164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:19 compute-0 sudo[112161]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:20 compute-0 sudo[112314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtmcbtmiqkvnojrrvlakzajeulsrrzcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947019.9229112-161-121938743393770/AnsiballZ_file.py'
Feb 24 15:30:20 compute-0 sudo[112314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:20 compute-0 python3.9[112317]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:20 compute-0 sudo[112314]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:20 compute-0 sudo[112467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uazvhanjkzbitmtkucuwbfcrdesjtozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947020.5193236-161-57941982633622/AnsiballZ_file.py'
Feb 24 15:30:20 compute-0 sudo[112467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:20 compute-0 python3.9[112470]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:20 compute-0 sudo[112467]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:21 compute-0 sudo[112620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biqgckvwszdsstcfzrvraapzooqqjbbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947021.08928-161-119155949012375/AnsiballZ_file.py'
Feb 24 15:30:21 compute-0 sudo[112620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:21 compute-0 python3.9[112623]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:30:21 compute-0 sudo[112620]: pam_unix(sudo:session): session closed for user root
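The same seven units are then removed from /etc/systemd/system. systemd searches /etc/systemd/system ahead of /usr/lib/systemd/system, so a copy left in either directory would keep the unit resolvable; purging both paths is what actually retires it. A quick verification, as a sketch:

  # confirm nothing remains in either unit search path (sketch)
  ls /etc/systemd/system/tripleo_nova_* /usr/lib/systemd/system/tripleo_nova_* 2>/dev/null \
    || echo "all tripleo_nova units removed"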
Feb 24 15:30:22 compute-0 sudo[112773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qskbmxozvmyyscjeunosjytlbxhhjsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947021.760037-212-183475260266447/AnsiballZ_command.py'
Feb 24 15:30:22 compute-0 sudo[112773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:22 compute-0 python3.9[112776]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:22 compute-0 sudo[112773]: pam_unix(sudo:session): session closed for user root
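The shell snippet logged above is a guarded teardown of certmonger: it acts only if the service is active, and it masks only when /etc/systemd/system holds no local certmonger.service, since systemctl mask creates a symlink to /dev/null at that exact path and will not replace an existing local unit file. The same idiom with comments, as a sketch:

  if systemctl is-active certmonger.service; then
    systemctl disable --now certmonger.service    # stop and disable together
    # mask only if no local unit file would be shadowed by the /dev/null symlink
    test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
  fi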
Feb 24 15:30:22 compute-0 python3.9[112928]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:30:23 compute-0 sudo[113078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etwqnfvpimqrafxcihpzsijhffqzibbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947023.2301157-230-108069777666390/AnsiballZ_systemd_service.py'
Feb 24 15:30:23 compute-0 sudo[113078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:23 compute-0 python3.9[113081]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:30:23 compute-0 systemd[1]: Reloading.
Feb 24 15:30:23 compute-0 systemd-rc-local-generator[113129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:30:23 compute-0 systemd-sysv-generator[113136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:30:23 compute-0 podman[113083]: 2026-02-24 15:30:23.870226138 +0000 UTC m=+0.083605104 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:30:24 compute-0 sudo[113078]: pam_unix(sudo:session): session closed for user root
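With the unit files gone, the play issues a bare daemon_reload=True, which is why systemd logs "Reloading." and re-runs its generators (the rc-local and sysv-generator messages that accompany each reload). Manually this is just:

  # make systemd forget the deleted unit files and re-run generators (sketch)
  systemctl daemon-reload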
Feb 24 15:30:24 compute-0 sudo[113292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkyqihosmybrieatxudiephjoasyrcbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947024.1383555-238-247041005763679/AnsiballZ_command.py'
Feb 24 15:30:24 compute-0 sudo[113292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:24 compute-0 python3.9[113295]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:24 compute-0 sudo[113292]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:24 compute-0 sudo[113446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajkjsfefacfdqkzcktxgcxjntkhtgcru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947024.697882-238-237028062254948/AnsiballZ_command.py'
Feb 24 15:30:24 compute-0 sudo[113446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:25 compute-0 python3.9[113449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:25 compute-0 sudo[113446]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:25 compute-0 sudo[113602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztaceodpdvdixytpblwemsqkwdqbkkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947025.237876-238-255740292261217/AnsiballZ_command.py'
Feb 24 15:30:25 compute-0 sudo[113602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:25 compute-0 python3.9[113605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:25 compute-0 sudo[113602]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:25 compute-0 sudo[113756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrzkwxtksbjqpctfcvvtmanhtvwtima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947025.7572083-238-158008386466741/AnsiballZ_command.py'
Feb 24 15:30:25 compute-0 sudo[113756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:26 compute-0 python3.9[113759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:26 compute-0 sudo[113756]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:26 compute-0 sudo[113910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobmpsbqysysrplbeefayyqkdvtshmyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947026.2750206-238-87384802765655/AnsiballZ_command.py'
Feb 24 15:30:26 compute-0 sudo[113910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:26 compute-0 python3.9[113913]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:26 compute-0 sudo[113910]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:27 compute-0 sudo[114064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vehafblzbwdyitijlotkehlnhgoknbyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947026.868488-238-174720463787002/AnsiballZ_command.py'
Feb 24 15:30:27 compute-0 sudo[114064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:27 compute-0 python3.9[114067]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:27 compute-0 sudo[114064]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:27 compute-0 sudo[114218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzwhnttfauoebjohrmhhghhkrejhxxfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947027.3741164-238-174572962568119/AnsiballZ_command.py'
Feb 24 15:30:27 compute-0 sudo[114218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:27 compute-0 python3.9[114221]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:30:27 compute-0 sudo[114218]: pam_unix(sudo:session): session closed for user root
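Deleting a unit file does not clear any failed state systemd may still hold for that unit in memory, so the play follows up with one reset-failed call per removed unit. Collapsed into a loop, as a sketch:

  # drop residual state so the removed units vanish from systemctl listings (sketch)
  for u in tripleo_nova_libvirt.target tripleo_nova_virtlogd_wrapper.service \
           tripleo_nova_virtnodedevd.service tripleo_nova_virtproxyd.service \
           tripleo_nova_virtqemud.service tripleo_nova_virtsecretd.service \
           tripleo_nova_virtstoraged.service; do
    /usr/bin/systemctl reset-failed "$u"
  done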
Feb 24 15:30:27 compute-0 sshd-session[113527]: Invalid user ubuntu from 120.48.56.86 port 54938
Feb 24 15:30:28 compute-0 sshd-session[113527]: Received disconnect from 120.48.56.86 port 54938:11:  [preauth]
Feb 24 15:30:28 compute-0 sshd-session[113527]: Disconnected from invalid user ubuntu 120.48.56.86 port 54938 [preauth]
Feb 24 15:30:28 compute-0 sudo[114372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbbmqrmsabpkeajjucdywdncmnmsqjib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947028.1844807-292-123463209258113/AnsiballZ_getent.py'
Feb 24 15:30:28 compute-0 sudo[114372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:28 compute-0 python3.9[114375]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 24 15:30:28 compute-0 sudo[114372]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:29 compute-0 sudo[114526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztslfdjnpgsmixjfoakfnobyyqygueqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947028.9631925-300-49888763325745/AnsiballZ_group.py'
Feb 24 15:30:29 compute-0 sudo[114526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:29 compute-0 python3.9[114529]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:30:29 compute-0 groupadd[114530]: group added to /etc/group: name=libvirt, GID=42473
Feb 24 15:30:29 compute-0 groupadd[114530]: group added to /etc/gshadow: name=libvirt
Feb 24 15:30:29 compute-0 groupadd[114530]: new group: name=libvirt, GID=42473
Feb 24 15:30:29 compute-0 sudo[114526]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:30 compute-0 sudo[114685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyebnwjfefwcswenzpuqvxdxwmrleiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947029.8089695-308-36618968380052/AnsiballZ_user.py'
Feb 24 15:30:30 compute-0 sudo[114685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:30 compute-0 python3.9[114688]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 24 15:30:30 compute-0 useradd[114690]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/1
Feb 24 15:30:30 compute-0 sudo[114685]: pam_unix(sudo:session): session closed for user root
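After the getent probe for a libvirt account, the play pins the group and user to GID/UID 42473 with a nologin shell; the groupadd/useradd records above are the modules doing exactly that. A rough command-line equivalent, assuming the IDs as logged:

  # fixed-ID libvirt account, no interactive login (sketch)
  groupadd -g 42473 libvirt
  useradd -m -u 42473 -g libvirt -c "libvirt user" -s /sbin/nologin libvirt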
Feb 24 15:30:31 compute-0 sudo[114846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesbpmvszpruriunjennxineojymtlso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947030.984749-319-88644302875257/AnsiballZ_setup.py'
Feb 24 15:30:31 compute-0 sudo[114846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:31 compute-0 python3.9[114849]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:30:31 compute-0 sudo[114846]: pam_unix(sudo:session): session closed for user root
Feb 24 15:30:32 compute-0 sudo[114955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jobkiypgptdlyrfkntnwscdywtlggqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947030.984749-319-88644302875257/AnsiballZ_dnf.py'
Feb 24 15:30:32 compute-0 podman[114881]: 2026-02-24 15:30:32.147845385 +0000 UTC m=+0.100039801 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 24 15:30:32 compute-0 sudo[114955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:30:32 compute-0 python3.9[114960]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
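Facts were gathered only for ansible_pkg_mgr, and the dnf task then installs the full libvirt/QEMU stack. Note that several names are logged with trailing spaces ('libvirt ', 'libvirt-admin ', ...); the install nonetheless proceeds, as the package-scriptlet user creation further down shows. A trimmed manual equivalent, as a sketch:

  # same package set as the logged dnf task, whitespace stripped (sketch)
  dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
      qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
      edk2-ovmf ceph-common cyrus-sasl-scram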
Feb 24 15:30:54 compute-0 podman[115149]: 2026-02-24 15:30:54.122060706 +0000 UTC m=+0.075645102 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:30:55.407 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:30:55.409 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:30:55.409 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:30:57 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:30:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:31:02 compute-0 anacron[30515]: Job `cron.daily' started
Feb 24 15:31:02 compute-0 anacron[30515]: Job `cron.daily' terminated
Feb 24 15:31:03 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 24 15:31:03 compute-0 podman[115178]: 2026-02-24 15:31:03.176190549 +0000 UTC m=+0.118802598 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:31:07 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:31:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:31:25 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 24 15:31:25 compute-0 podman[119376]: 2026-02-24 15:31:25.140742861 +0000 UTC m=+0.069550583 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:31:34 compute-0 podman[125452]: 2026-02-24 15:31:34.146577005 +0000 UTC m=+0.109528320 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:31:55.409 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:31:55.409 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:31:55.409 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:31:55 compute-0 sshd-session[132162]: Invalid user abc from 120.48.56.86 port 51036
Feb 24 15:31:55 compute-0 podman[132168]: 2026-02-24 15:31:55.912443062 +0000 UTC m=+0.078856022 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 24 15:31:56 compute-0 sshd-session[132162]: Received disconnect from 120.48.56.86 port 51036:11:  [preauth]
Feb 24 15:31:56 compute-0 sshd-session[132162]: Disconnected from invalid user abc 120.48.56.86 port 51036 [preauth]
Feb 24 15:31:56 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 24 15:31:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 24 15:31:57 compute-0 groupadd[132196]: group added to /etc/group: name=dnsmasq, GID=993
Feb 24 15:31:57 compute-0 groupadd[132196]: group added to /etc/gshadow: name=dnsmasq
Feb 24 15:31:57 compute-0 groupadd[132196]: new group: name=dnsmasq, GID=993
Feb 24 15:31:57 compute-0 useradd[132203]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 24 15:31:57 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:31:57 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 24 15:31:57 compute-0 dbus-broker-launch[787]: Noticed file-system modification, trigger reload.
Feb 24 15:31:58 compute-0 groupadd[132216]: group added to /etc/group: name=clevis, GID=992
Feb 24 15:31:58 compute-0 groupadd[132216]: group added to /etc/gshadow: name=clevis
Feb 24 15:31:58 compute-0 groupadd[132216]: new group: name=clevis, GID=992
Feb 24 15:31:58 compute-0 useradd[132223]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 24 15:31:58 compute-0 usermod[132233]: add 'clevis' to group 'tss'
Feb 24 15:31:58 compute-0 usermod[132233]: add 'clevis' to shadow group 'tss'
Feb 24 15:32:00 compute-0 polkitd[44319]: Reloading rules
Feb 24 15:32:00 compute-0 polkitd[44319]: Collecting garbage unconditionally...
Feb 24 15:32:00 compute-0 polkitd[44319]: Loading rules from directory /etc/polkit-1/rules.d
Feb 24 15:32:00 compute-0 polkitd[44319]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 24 15:32:00 compute-0 polkitd[44319]: Finished loading, compiling and executing 3 rules
Feb 24 15:32:00 compute-0 polkitd[44319]: Reloading rules
Feb 24 15:32:00 compute-0 polkitd[44319]: Collecting garbage unconditionally...
Feb 24 15:32:00 compute-0 polkitd[44319]: Loading rules from directory /etc/polkit-1/rules.d
Feb 24 15:32:00 compute-0 polkitd[44319]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 24 15:32:00 compute-0 polkitd[44319]: Finished loading, compiling and executing 3 rules
Feb 24 15:32:01 compute-0 groupadd[132423]: group added to /etc/group: name=ceph, GID=167
Feb 24 15:32:01 compute-0 groupadd[132423]: group added to /etc/gshadow: name=ceph
Feb 24 15:32:01 compute-0 groupadd[132423]: new group: name=ceph, GID=167
Feb 24 15:32:01 compute-0 useradd[132429]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Feb 24 15:32:04 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Feb 24 15:32:04 compute-0 sshd[1019]: Received signal 15; terminating.
Feb 24 15:32:04 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Feb 24 15:32:04 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Feb 24 15:32:04 compute-0 systemd[1]: sshd.service: Consumed 1.687s CPU time, read 564.0K from disk, written 16.0K to disk.
Feb 24 15:32:04 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Feb 24 15:32:04 compute-0 systemd[1]: Stopping sshd-keygen.target...
Feb 24 15:32:04 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 15:32:04 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 15:32:04 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 24 15:32:04 compute-0 systemd[1]: Reached target sshd-keygen.target.
Feb 24 15:32:04 compute-0 systemd[1]: Starting OpenSSH server daemon...
Feb 24 15:32:04 compute-0 sshd[132949]: Server listening on 0.0.0.0 port 22.
Feb 24 15:32:04 compute-0 sshd[132949]: Server listening on :: port 22.
Feb 24 15:32:04 compute-0 systemd[1]: Started OpenSSH server daemon.
Feb 24 15:32:04 compute-0 podman[132948]: 2026-02-24 15:32:04.345812343 +0000 UTC m=+0.143545281 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 15:32:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:32:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:32:06 compute-0 systemd[1]: Reloading.
Feb 24 15:32:06 compute-0 systemd-rc-local-generator[133231]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:06 compute-0 systemd-sysv-generator[133237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:32:08 compute-0 sudo[114955]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:09 compute-0 sudo[137383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmwbpnawxqctsavkjaekwxkwawtsmpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947128.7621365-331-189119774646917/AnsiballZ_systemd.py'
Feb 24 15:32:09 compute-0 sudo[137383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:09 compute-0 python3.9[137417]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:32:09 compute-0 systemd[1]: Reloading.
Feb 24 15:32:09 compute-0 systemd-sysv-generator[138059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:09 compute-0 systemd-rc-local-generator[138054]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:10 compute-0 sudo[137383]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:10 compute-0 sudo[138904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjgofchxxcjvcepshdofvgjadhcghmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947130.1872647-331-133053814616946/AnsiballZ_systemd.py'
Feb 24 15:32:10 compute-0 sudo[138904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:10 compute-0 python3.9[138932]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:32:10 compute-0 systemd[1]: Reloading.
Feb 24 15:32:10 compute-0 systemd-rc-local-generator[139391]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:11 compute-0 systemd-sysv-generator[139395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:11 compute-0 sudo[138904]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:11 compute-0 sudo[140249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftsmvnjtknaxpbbdmsnfwgghjfuidcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947131.327061-331-190171520701935/AnsiballZ_systemd.py'
Feb 24 15:32:11 compute-0 sudo[140249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:11 compute-0 python3.9[140275]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:32:11 compute-0 systemd[1]: Reloading.
Feb 24 15:32:12 compute-0 systemd-rc-local-generator[140799]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:12 compute-0 systemd-sysv-generator[140806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:12 compute-0 sudo[140249]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:12 compute-0 sudo[141631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygdlsrynguceivemeipudygluqgoyuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947132.3300905-331-184374749386677/AnsiballZ_systemd.py'
Feb 24 15:32:12 compute-0 sudo[141631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:12 compute-0 python3.9[141657]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:32:12 compute-0 systemd[1]: Reloading.
Feb 24 15:32:13 compute-0 systemd-rc-local-generator[142087]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:13 compute-0 systemd-sysv-generator[142096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:13 compute-0 sudo[141631]: pam_unix(sudo:session): session closed for user root
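
The four tasks above stop and mask the monolithic libvirtd service plus its legacy TCP/TLS socket units before the modular daemons take over. A minimal Ansible sketch of the same sequence, reconstructed from the module parameters recorded in the log (unit names, states, and flags come from the log; the play header and the loop form are assumptions):

    - hosts: compute-0        # inventory name assumed
      become: true
      tasks:
        # Each systemd call with masked=true rewrites unit symlinks, which is
        # why every task above is followed by a "systemd[1]: Reloading." line.
        - name: Stop and mask monolithic libvirt units
          ansible.builtin.systemd:
            name: "{{ item }}"
            state: stopped
            enabled: false
            masked: true
          loop:
            - libvirtd
            - libvirtd-tcp.socket
            - libvirtd-tls.socket
            - virtproxyd-tcp.socket
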
Feb 24 15:32:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:32:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:32:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.577s CPU time.
Feb 24 15:32:13 compute-0 systemd[1]: run-re31933b3898642babc3659bed1b94166.service: Deactivated successfully.
Feb 24 15:32:13 compute-0 sudo[142582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsiqkyudupjkwuhxrlxaevztugegroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947133.4411948-360-212844726443580/AnsiballZ_systemd.py'
Feb 24 15:32:13 compute-0 sudo[142582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:14 compute-0 python3.9[142585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:14 compute-0 systemd[1]: Reloading.
Feb 24 15:32:14 compute-0 systemd-rc-local-generator[142613]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:14 compute-0 systemd-sysv-generator[142616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:14 compute-0 sudo[142582]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:15 compute-0 sudo[142780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmsamxkqwkpqtpefyhblfxndnsbgbxbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947134.6205397-360-6512367612327/AnsiballZ_systemd.py'
Feb 24 15:32:15 compute-0 sudo[142780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:15 compute-0 python3.9[142783]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:15 compute-0 systemd[1]: Reloading.
Feb 24 15:32:15 compute-0 systemd-sysv-generator[142812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:15 compute-0 systemd-rc-local-generator[142808]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:15 compute-0 sudo[142780]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:16 compute-0 sudo[142978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuuenakgvsakzmxlernydzfieweqocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947135.7969298-360-69338439366193/AnsiballZ_systemd.py'
Feb 24 15:32:16 compute-0 sudo[142978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:16 compute-0 python3.9[142981]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:16 compute-0 systemd[1]: Reloading.
Feb 24 15:32:16 compute-0 systemd-rc-local-generator[143007]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:16 compute-0 systemd-sysv-generator[143010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:16 compute-0 sudo[142978]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:17 compute-0 sudo[143176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjqoyvaoiqjzunavfdqzypafohxpmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947136.8981953-360-266202881449466/AnsiballZ_systemd.py'
Feb 24 15:32:17 compute-0 sudo[143176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:17 compute-0 python3.9[143179]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:17 compute-0 sudo[143176]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:17 compute-0 sudo[143332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppsmprqbtrmsrpilsjmoodeuokhqyjae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947137.6243188-360-121550650602789/AnsiballZ_systemd.py'
Feb 24 15:32:17 compute-0 sudo[143332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:18 compute-0 python3.9[143335]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:18 compute-0 systemd[1]: Reloading.
Feb 24 15:32:18 compute-0 systemd-rc-local-generator[143367]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:18 compute-0 systemd-sysv-generator[143372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:18 compute-0 sudo[143332]: pam_unix(sudo:session): session closed for user root
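
The five invocations above enable and unmask the modular libvirt daemons with state=None, i.e. they are deliberately not started at this point. A condensed task snippet (same play context as the earlier sketch; the loop form is an assumption, names and flags are from the log):

    - name: Enable modular libvirt daemons without starting them
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.service
        - virtnodedevd.service
        - virtproxyd.service
        - virtqemud.service
        - virtsecretd.service

Leaving state unset suggests the intent is socket activation: the activation sockets enabled further down can start each daemon on first connection.
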
Feb 24 15:32:18 compute-0 sudo[143530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yufczkqxikjgfnqkzkjtznhxevxeqths ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947138.6971934-396-112782289527450/AnsiballZ_systemd.py'
Feb 24 15:32:18 compute-0 sudo[143530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:19 compute-0 python3.9[143533]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 24 15:32:19 compute-0 systemd[1]: Reloading.
Feb 24 15:32:19 compute-0 systemd-rc-local-generator[143565]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:32:19 compute-0 systemd-sysv-generator[143568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:32:19 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 24 15:32:19 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 24 15:32:19 compute-0 sudo[143530]: pam_unix(sudo:session): session closed for user root
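
virtproxyd-tls.socket is the one unit in this run that also gets state=started, and the two "Listening on ..." lines above confirm systemd activated it immediately. Sketch of that single task (parameters from the log):

    - name: Enable and start the libvirt TLS proxy socket
      ansible.builtin.systemd:
        name: virtproxyd-tls.socket
        enabled: true
        masked: false
        state: started

Activation can be confirmed out-of-band with "systemctl is-active virtproxyd-tls.socket".
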
Feb 24 15:32:20 compute-0 sudo[143731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pobphrpuxxcyvttnqonqxaogahyuhtox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947139.884927-404-87551528192929/AnsiballZ_systemd.py'
Feb 24 15:32:20 compute-0 sudo[143731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:20 compute-0 python3.9[143734]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:20 compute-0 sudo[143731]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:21 compute-0 sudo[143887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpcyiorkdpuvmglowvuigixciipczfwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947140.7700803-404-206124193799146/AnsiballZ_systemd.py'
Feb 24 15:32:21 compute-0 sudo[143887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:21 compute-0 python3.9[143890]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:21 compute-0 sudo[143887]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:21 compute-0 sudo[144043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjmcuguepmmfpkcclxygkckbldhxzva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947141.729383-404-236375985030521/AnsiballZ_systemd.py'
Feb 24 15:32:21 compute-0 sudo[144043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:22 compute-0 python3.9[144046]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:22 compute-0 sudo[144043]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:22 compute-0 sudo[144199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxhnfxonghxvyjujxdsdotjlccpxnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947142.4612505-404-168671492003726/AnsiballZ_systemd.py'
Feb 24 15:32:22 compute-0 sudo[144199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:23 compute-0 python3.9[144202]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:23 compute-0 sudo[144199]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:23 compute-0 sudo[144355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfqmcoulmcjlyqsqhvqfxkjnaysipji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947143.212471-404-184199637684852/AnsiballZ_systemd.py'
Feb 24 15:32:23 compute-0 sudo[144355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:23 compute-0 python3.9[144358]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:23 compute-0 sudo[144355]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:24 compute-0 sudo[144511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-schkrkmdcaxdjfrjgdzbfmhaxuqfwmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947144.0206416-404-101604305520378/AnsiballZ_systemd.py'
Feb 24 15:32:24 compute-0 sudo[144511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:24 compute-0 python3.9[144514]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:24 compute-0 sudo[144511]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:25 compute-0 sudo[144667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwrtsnnfgwolaannmdgwuweigjkodhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947144.8052104-404-257794325441976/AnsiballZ_systemd.py'
Feb 24 15:32:25 compute-0 sudo[144667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:25 compute-0 python3.9[144670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:25 compute-0 sudo[144667]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:26 compute-0 sudo[144835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilygbxvgxlmrlwyipgzsozqmfwqxbkoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947145.7043695-404-228062709693979/AnsiballZ_systemd.py'
Feb 24 15:32:26 compute-0 sudo[144835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:26 compute-0 podman[144797]: 2026-02-24 15:32:26.121187357 +0000 UTC m=+0.086522413 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Feb 24 15:32:26 compute-0 python3.9[144843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:26 compute-0 sudo[144835]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:26 compute-0 sudo[145001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnppgjmzzaqrwdpgghxlmhpessbgbvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947146.5920753-404-267989335807258/AnsiballZ_systemd.py'
Feb 24 15:32:26 compute-0 sudo[145001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:27 compute-0 python3.9[145004]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:27 compute-0 sudo[145001]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:27 compute-0 sudo[145157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxkgktyjovzkhoveafgqsjlpnbgywwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947147.355128-404-84697686519813/AnsiballZ_systemd.py'
Feb 24 15:32:27 compute-0 sudo[145157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:27 compute-0 python3.9[145160]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:28 compute-0 sudo[145157]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:28 compute-0 sudo[145313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttnwxxsdwjzpjiobudjxhuplnknqxtlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947148.1487408-404-36988183736594/AnsiballZ_systemd.py'
Feb 24 15:32:28 compute-0 sudo[145313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:28 compute-0 python3.9[145316]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:28 compute-0 sudo[145313]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:29 compute-0 sudo[145469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvoicgmsnqeizniuhahwujhqfdajoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947148.9951973-404-162001423312133/AnsiballZ_systemd.py'
Feb 24 15:32:29 compute-0 sudo[145469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:29 compute-0 python3.9[145472]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:29 compute-0 sudo[145469]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:30 compute-0 sudo[145625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrmagovjueyevcjwhgshrrhhbnrthmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947149.8751967-404-48417972396899/AnsiballZ_systemd.py'
Feb 24 15:32:30 compute-0 sudo[145625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:30 compute-0 python3.9[145628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:30 compute-0 sudo[145625]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:30 compute-0 sudo[145781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toudryvqnujachkpyjqsooqlvsxjhhls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947150.6720488-404-95271790074197/AnsiballZ_systemd.py'
Feb 24 15:32:30 compute-0 sudo[145781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:31 compute-0 python3.9[145784]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 24 15:32:31 compute-0 sudo[145781]: pam_unix(sudo:session): session closed for user root
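
The fourteen tasks above enable the main, read-only (-ro), and admin activation sockets for each modular daemon, one systemd call per unit (note virtlogd gets no -ro socket in this run). Collapsed into a single loop as a sketch; the unit list is exactly what the log shows:

    - name: Enable libvirt activation sockets
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.socket
        - virtlogd-admin.socket
        - virtnodedevd.socket
        - virtnodedevd-ro.socket
        - virtnodedevd-admin.socket
        - virtproxyd.socket
        - virtproxyd-ro.socket
        - virtproxyd-admin.socket
        - virtqemud.socket
        - virtqemud-ro.socket
        - virtqemud-admin.socket
        - virtsecretd.socket
        - virtsecretd-ro.socket
        - virtsecretd-admin.socket
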
Feb 24 15:32:31 compute-0 sudo[145937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwcbueijsmwcswxfxzdabkrevihibpme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947151.6108313-506-24864952983543/AnsiballZ_file.py'
Feb 24 15:32:31 compute-0 sudo[145937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:32 compute-0 python3.9[145940]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:32 compute-0 sudo[145937]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:32 compute-0 sudo[146090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlutpqzlxvvxtjcuspaiadfywgdenjoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947152.2395487-506-147382833549132/AnsiballZ_file.py'
Feb 24 15:32:32 compute-0 sudo[146090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:32 compute-0 python3.9[146093]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:32 compute-0 sudo[146090]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:33 compute-0 sudo[146243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjkvuvvtndrvsmuquehxqiadczwojitz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947152.8623526-506-155460055233465/AnsiballZ_file.py'
Feb 24 15:32:33 compute-0 sudo[146243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:33 compute-0 python3.9[146246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:33 compute-0 sudo[146243]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:33 compute-0 sudo[146396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wthicztcfgndxsshwnllmjrtfxthjych ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947153.5410109-506-101666941675634/AnsiballZ_file.py'
Feb 24 15:32:33 compute-0 sudo[146396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:34 compute-0 python3.9[146399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:34 compute-0 sudo[146396]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:34 compute-0 sudo[146563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhcxqrnrovtqmsfcarosbfdanpxglcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947154.2271338-506-264868518901675/AnsiballZ_file.py'
Feb 24 15:32:34 compute-0 sudo[146563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:34 compute-0 podman[146523]: 2026-02-24 15:32:34.559289781 +0000 UTC m=+0.121935435 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:32:34 compute-0 python3.9[146570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:34 compute-0 sudo[146563]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:35 compute-0 sudo[146730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugwsqrkbizyqwgxagplzejzqwhcoojet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947154.8416514-506-248966891861223/AnsiballZ_file.py'
Feb 24 15:32:35 compute-0 sudo[146730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:35 compute-0 python3.9[146733]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:32:35 compute-0 sudo[146730]: pam_unix(sudo:session): session closed for user root
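
The ansible.builtin.file tasks above create the tmpfiles, firewall, and PKI directories with setype=container_file_t, presumably so the containerized services can access them across the SELinux boundary. Condensed sketch (paths, owners, groups, and modes are from the log; the loop structure is an assumption):

    - name: Create host directories labeled for container access
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: directory
        owner: root
        group: "{{ item.group | default('root') }}"
        mode: "{{ item.mode | default(omit) }}"
        setype: container_file_t
      loop:
        - { path: /etc/tmpfiles.d/ }
        - { path: /var/lib/edpm-config/firewall }
        - { path: /etc/pki/libvirt, mode: '0755' }
        - { path: /etc/pki/libvirt/private, mode: '0755' }
        - { path: /etc/pki/CA, mode: '0755' }
        - { path: /etc/pki/qemu, group: qemu }
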
Feb 24 15:32:36 compute-0 python3.9[146883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
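
This setup call refreshes only the SELinux facts rather than re-gathering the full inventory, which keeps a mid-play fact refresh cheap. Equivalent task, with the subset list taken verbatim from the log:

    - name: Refresh only SELinux facts
      ansible.builtin.setup:
        gather_subset:
          - '!all'
          - '!min'
          - selinux
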
Feb 24 15:32:36 compute-0 sudo[147033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcywvtzvskqndvfvtmdteuxygzzblike ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947156.3229024-557-98240305035189/AnsiballZ_stat.py'
Feb 24 15:32:36 compute-0 sudo[147033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:36 compute-0 python3.9[147036]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:37 compute-0 sudo[147033]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:37 compute-0 sudo[147159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmdnleftbrajfjbaqjaglllihcqhpwwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947156.3229024-557-98240305035189/AnsiballZ_copy.py'
Feb 24 15:32:37 compute-0 sudo[147159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:37 compute-0 python3.9[147162]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947156.3229024-557-98240305035189/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:37 compute-0 sudo[147159]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:38 compute-0 sudo[147312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vydwxvxrbppnafgfatxegnxkgatbzjen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947157.9086769-557-279148228116559/AnsiballZ_stat.py'
Feb 24 15:32:38 compute-0 sudo[147312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:38 compute-0 python3.9[147315]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:38 compute-0 sudo[147312]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:38 compute-0 sudo[147438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnrnxaorrjlwxxyvjocypvjwgplplrxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947157.9086769-557-279148228116559/AnsiballZ_copy.py'
Feb 24 15:32:38 compute-0 sudo[147438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:39 compute-0 python3.9[147441]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947157.9086769-557-279148228116559/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:39 compute-0 sudo[147438]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:39 compute-0 sudo[147591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozmzficftssasgavpwobczodottljxez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947159.2577217-557-127956668642984/AnsiballZ_stat.py'
Feb 24 15:32:39 compute-0 sudo[147591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:39 compute-0 python3.9[147594]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:39 compute-0 sudo[147591]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:40 compute-0 sudo[147717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pegvmdamypacwcnzlhljnscszguvkeju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947159.2577217-557-127956668642984/AnsiballZ_copy.py'
Feb 24 15:32:40 compute-0 sudo[147717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:40 compute-0 python3.9[147720]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947159.2577217-557-127956668642984/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:40 compute-0 sudo[147717]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:40 compute-0 sudo[147870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdcqmywnbwtxokrfauamxvrwrromxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947160.502052-557-30879316756437/AnsiballZ_stat.py'
Feb 24 15:32:40 compute-0 sudo[147870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:41 compute-0 python3.9[147873]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:41 compute-0 sudo[147870]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:41 compute-0 sudo[147996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plpafulnruebvqchcwbtveuudgrmpukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947160.502052-557-30879316756437/AnsiballZ_copy.py'
Feb 24 15:32:41 compute-0 sudo[147996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:41 compute-0 python3.9[147999]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947160.502052-557-30879316756437/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:41 compute-0 sudo[147996]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:42 compute-0 sudo[148149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssbqjqvqpptkqtafdzdgxeqedcdhqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947161.7938206-557-159348280837389/AnsiballZ_stat.py'
Feb 24 15:32:42 compute-0 sudo[148149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:42 compute-0 python3.9[148152]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:42 compute-0 sudo[148149]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:42 compute-0 sudo[148275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orvfzznztrkamdxbnetkgtincogyzhcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947161.7938206-557-159348280837389/AnsiballZ_copy.py'
Feb 24 15:32:42 compute-0 sudo[148275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:42 compute-0 python3.9[148278]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947161.7938206-557-159348280837389/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:42 compute-0 sudo[148275]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:43 compute-0 sudo[148428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txctnfzshprhgmanmixtgixhkdqogtzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947163.0949035-557-50705679687718/AnsiballZ_stat.py'
Feb 24 15:32:43 compute-0 sudo[148428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:43 compute-0 python3.9[148431]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:43 compute-0 sudo[148428]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:43 compute-0 sudo[148554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtimbfanrxudcogguzanzdktumkcbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947163.0949035-557-50705679687718/AnsiballZ_copy.py'
Feb 24 15:32:43 compute-0 sudo[148554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:44 compute-0 python3.9[148557]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947163.0949035-557-50705679687718/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:44 compute-0 sudo[148554]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:44 compute-0 sudo[148707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyggpyzaggrybrvviripqdtghyytdzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947164.3502638-557-6461820596004/AnsiballZ_stat.py'
Feb 24 15:32:44 compute-0 sudo[148707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:44 compute-0 python3.9[148710]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:44 compute-0 sudo[148707]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:45 compute-0 sudo[148831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbrvebxkdmzmbznbgbtimrqioojbrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947164.3502638-557-6461820596004/AnsiballZ_copy.py'
Feb 24 15:32:45 compute-0 sudo[148831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:45 compute-0 python3.9[148834]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947164.3502638-557-6461820596004/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:45 compute-0 sudo[148831]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:46 compute-0 sudo[148984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aactjjckictmbyjmugpmbengbsbqctar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947165.7029562-557-218921684981797/AnsiballZ_stat.py'
Feb 24 15:32:46 compute-0 sudo[148984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:46 compute-0 python3.9[148987]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:46 compute-0 sudo[148984]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:46 compute-0 sudo[149110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljxwcrgcpgipztjqdrpnxokuiisvhsph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947165.7029562-557-218921684981797/AnsiballZ_copy.py'
Feb 24 15:32:46 compute-0 sudo[149110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:46 compute-0 python3.9[149113]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771947165.7029562-557-218921684981797/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:46 compute-0 sudo[149110]: pam_unix(sudo:session): session closed for user root
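
Each configuration file above arrives as a stat/copy pair: ansible.legacy.stat checksums the existing file, then ansible.legacy.copy ships content only when it differs. That pairing is the remote half of the copy action plugin, so the repeated two-step pattern in the log is expected behavior, not a retry. Note the identical checksum (7a604468...) for virtnodedevd.conf, virtqemud.conf, and virtsecretd.conf: the same source content is deployed to all three. A condensed sketch of the copy side (destinations, ownership, and modes from the log; src names are the _original_basename values and are illustrative):

    - name: Install libvirt daemon configuration
      ansible.builtin.copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: libvirt
        group: libvirt
        mode: "{{ item.mode | default('0640') }}"
      loop:
        - { src: virtlogd.conf, dest: /etc/libvirt/virtlogd.conf }
        - { src: virtnodedevd.conf, dest: /etc/libvirt/virtnodedevd.conf }
        - { src: virtproxyd.conf, dest: /etc/libvirt/virtproxyd.conf }
        - { src: virtqemud.conf, dest: /etc/libvirt/virtqemud.conf }
        - { src: qemu.conf, dest: /etc/libvirt/qemu.conf }   # rendered from qemu.conf.j2 upstream
        - { src: virtsecretd.conf, dest: /etc/libvirt/virtsecretd.conf }
        - { src: auth.conf, dest: /etc/libvirt/auth.conf, mode: '0600' }
        - { src: sasl_libvirt.conf, dest: /etc/sasl2/libvirt.conf }
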
Feb 24 15:32:47 compute-0 sudo[149263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-makjvkmrobhvvfbwmtndephcsqoeufhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947167.0141869-670-259920922170661/AnsiballZ_command.py'
Feb 24 15:32:47 compute-0 sudo[149263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:47 compute-0 python3.9[149266]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 24 15:32:47 compute-0 sudo[149263]: pam_unix(sudo:session): session closed for user root
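
The command above seeds the Cyrus SASL database used by the libvirt proxy: saslpasswd2 -p reads the password from stdin and creates the user "migration" in realm "openstack" (-u) inside the libvirt application database (-a) at /etc/libvirt/passwd.db. The CI run passes a throwaway value on stdin, which the command module then records in the journal; a real deployment would want no_log. Sketch (command line verbatim from the log; the variable name is illustrative):

    - name: Create the SASL user for live migration
      ansible.builtin.command:
        cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
        stdin: "{{ libvirt_migration_password }}"  # illustrative variable name
      no_log: true  # suggested hardening; this CI log shows the stdin value in cleartext
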
Feb 24 15:32:47 compute-0 sudo[149417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqfryjckwevbwvkdqaefzuyagmddtwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947167.7219834-679-166710391702286/AnsiballZ_file.py'
Feb 24 15:32:47 compute-0 sudo[149417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:48 compute-0 python3.9[149420]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:48 compute-0 sudo[149417]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:48 compute-0 sudo[149570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouuxjcxquuuqckrybxycamlwsmipviye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947168.3365886-679-213071782247953/AnsiballZ_file.py'
Feb 24 15:32:48 compute-0 sudo[149570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:48 compute-0 python3.9[149573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:48 compute-0 sudo[149570]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:49 compute-0 sudo[149723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzphfoyytuoytetacmpjcgcwrhvzqdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947168.9157493-679-169071681856671/AnsiballZ_file.py'
Feb 24 15:32:49 compute-0 sudo[149723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:49 compute-0 python3.9[149726]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:49 compute-0 sudo[149723]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:49 compute-0 sudo[149876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbhdvquhacgpvfiqbfaliebgjvnvryu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947169.520042-679-20333265436857/AnsiballZ_file.py'
Feb 24 15:32:49 compute-0 sudo[149876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:49 compute-0 python3.9[149879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:50 compute-0 sudo[149876]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:50 compute-0 sudo[150029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpafrqcpkdkequckbxlqgbqjburymejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947170.1628296-679-154858622025162/AnsiballZ_file.py'
Feb 24 15:32:50 compute-0 sudo[150029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:50 compute-0 python3.9[150032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:50 compute-0 sudo[150029]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:51 compute-0 sudo[150182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unlljagmnvsscbhdfcpndgmaqzyvmjyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947170.8246386-679-42231516677765/AnsiballZ_file.py'
Feb 24 15:32:51 compute-0 sudo[150182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:51 compute-0 python3.9[150185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:51 compute-0 sudo[150182]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:51 compute-0 sudo[150335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrpcnbbfdfuawzzhihrswtgzpgktduch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947171.4570854-679-258966895281880/AnsiballZ_file.py'
Feb 24 15:32:51 compute-0 sudo[150335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:51 compute-0 python3.9[150338]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:51 compute-0 sudo[150335]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:52 compute-0 sudo[150488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqexjyiriiingxwfzashuhqasylvjucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947172.0119874-679-59762768571006/AnsiballZ_file.py'
Feb 24 15:32:52 compute-0 sudo[150488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:52 compute-0 python3.9[150491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:52 compute-0 sudo[150488]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:52 compute-0 sudo[150641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrzlflatzinvoxelteqvpqrsdeofxec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947172.5977852-679-252174669280249/AnsiballZ_file.py'
Feb 24 15:32:52 compute-0 sudo[150641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:53 compute-0 python3.9[150644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:53 compute-0 sudo[150641]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:53 compute-0 sudo[150794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfwwksfiprfcaxzzstwxqejjidgjrckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947173.2466314-679-272055844310902/AnsiballZ_file.py'
Feb 24 15:32:53 compute-0 sudo[150794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:53 compute-0 python3.9[150797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:53 compute-0 sudo[150794]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:54 compute-0 sudo[150947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmggcxyquktqaahmoyoixemlcyniwzep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947173.832907-679-267042856691708/AnsiballZ_file.py'
Feb 24 15:32:54 compute-0 sudo[150947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:54 compute-0 python3.9[150950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:54 compute-0 sudo[150947]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:54 compute-0 sudo[151100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eustmbxsdnrpbeyboigxxcbeahjyxdpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947174.4400883-679-128612254584094/AnsiballZ_file.py'
Feb 24 15:32:54 compute-0 sudo[151100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:55 compute-0 python3.9[151103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:55 compute-0 sudo[151100]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:32:55.410 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:32:55.411 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:32:55.411 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:32:55 compute-0 sudo[151253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkwomwqwfyxwflniuyshesmdettpbtjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947175.1933653-679-97378763241419/AnsiballZ_file.py'
Feb 24 15:32:55 compute-0 sudo[151253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:56 compute-0 python3.9[151256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:56 compute-0 sudo[151253]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:56 compute-0 podman[151257]: 2026-02-24 15:32:56.219768593 +0000 UTC m=+0.071902116 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:32:56 compute-0 sudo[151425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cycysbubbfqivaamcgygmmxtbeatfdox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947176.2498877-679-200884537256050/AnsiballZ_file.py'
Feb 24 15:32:56 compute-0 sudo[151425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:56 compute-0 python3.9[151428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:56 compute-0 sudo[151425]: pam_unix(sudo:session): session closed for user root
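Up to this point the play has been creating systemd drop-in directories, one ansible.builtin.file invocation per libvirt socket unit (virtqemud, virtsecretd and their -ro/-admin variants above, with virtlogd, virtnodedevd and virtproxyd earlier in the run). A shell sketch of what these tasks amount to; the loop list is reconstructed from the paths logged, not taken from the playbook itself:

    for s in virtlogd virtlogd-admin \
             virtnodedevd virtnodedevd-ro virtnodedevd-admin \
             virtproxyd virtproxyd-ro virtproxyd-admin \
             virtqemud virtqemud-ro virtqemud-admin \
             virtsecretd virtsecretd-ro virtsecretd-admin; do
        # state=directory owner=root group=root mode=0755, as logged above
        install -d -o root -g root -m 0755 "/etc/systemd/system/${s}.socket.d"
    done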
Feb 24 15:32:57 compute-0 sudo[151578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etyevofohogjcazyibwteftsccdhbolp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947176.8844173-778-252493986279943/AnsiballZ_stat.py'
Feb 24 15:32:57 compute-0 sudo[151578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:57 compute-0 python3.9[151581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:57 compute-0 sudo[151578]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:57 compute-0 sudo[151702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodxusazszxltmdhnkilmyemcpclvxju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947176.8844173-778-252493986279943/AnsiballZ_copy.py'
Feb 24 15:32:57 compute-0 sudo[151702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:57 compute-0 python3.9[151705]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947176.8844173-778-252493986279943/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:57 compute-0 sudo[151702]: pam_unix(sudo:session): session closed for user root
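Each override.conf then lands as a stat/copy pair: the template task renders libvirt-socket.unit.j2 on the controller and hands the result to the copy action plugin, which first runs ansible.legacy.stat against the destination and only ships AnsiballZ_copy.py when the checksums differ (nothing exists yet on this first run, so every stat is followed by a copy). Every destination in this run logs the same SHA1, i.e. one template rendered with no per-socket variation:

    # all fourteen drop-ins hash to the value logged in the copy tasks:
    sha1sum /etc/systemd/system/virtlogd.socket.d/override.conf
    # 0bad41f409b4ee7e780a2a59dc18f5c84ed99826  /etc/systemd/system/virtlogd.socket.d/override.conf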
Feb 24 15:32:58 compute-0 sudo[151855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyfewywxyijlinoycamkmfjrckjydjaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947178.0557635-778-81544161693766/AnsiballZ_stat.py'
Feb 24 15:32:58 compute-0 sudo[151855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:58 compute-0 python3.9[151858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:58 compute-0 sudo[151855]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:58 compute-0 sudo[151979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvicddimehodvdkcxhluxodrjzpmfehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947178.0557635-778-81544161693766/AnsiballZ_copy.py'
Feb 24 15:32:58 compute-0 sudo[151979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:59 compute-0 python3.9[151982]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947178.0557635-778-81544161693766/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:32:59 compute-0 sudo[151979]: pam_unix(sudo:session): session closed for user root
Feb 24 15:32:59 compute-0 sudo[152132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfjwnnbfzomihfhbkftgaazgmbxjcri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947179.3239942-778-149954030981954/AnsiballZ_stat.py'
Feb 24 15:32:59 compute-0 sudo[152132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:32:59 compute-0 python3.9[152135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:32:59 compute-0 sudo[152132]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:00 compute-0 sudo[152256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutospbhypaifyvzrdefhiumjqcrqvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947179.3239942-778-149954030981954/AnsiballZ_copy.py'
Feb 24 15:33:00 compute-0 sudo[152256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:00 compute-0 python3.9[152259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947179.3239942-778-149954030981954/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:00 compute-0 sudo[152256]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:00 compute-0 sudo[152409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatdmupgwsuysmxzcitcivwyqztazhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947180.5553637-778-209365649974026/AnsiballZ_stat.py'
Feb 24 15:33:00 compute-0 sudo[152409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:01 compute-0 python3.9[152412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:01 compute-0 sudo[152409]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:01 compute-0 sudo[152533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enpxrpdjupwzkbdlgxuzaiprsebfimtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947180.5553637-778-209365649974026/AnsiballZ_copy.py'
Feb 24 15:33:01 compute-0 sudo[152533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:01 compute-0 python3.9[152536]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947180.5553637-778-209365649974026/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:01 compute-0 sudo[152533]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:02 compute-0 sudo[152686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtlpowfatpucgfisirgnbhnseeevtyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947181.7707431-778-141954910452690/AnsiballZ_stat.py'
Feb 24 15:33:02 compute-0 sudo[152686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:02 compute-0 python3.9[152689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:02 compute-0 sudo[152686]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:02 compute-0 sudo[152810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crerjczrlcevkcubqeemvqmafubxkmla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947181.7707431-778-141954910452690/AnsiballZ_copy.py'
Feb 24 15:33:02 compute-0 sudo[152810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:02 compute-0 python3.9[152813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947181.7707431-778-141954910452690/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:02 compute-0 sudo[152810]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:03 compute-0 sudo[152963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gooudbahhrjtvluvoulnqjujylqnzyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947183.125829-778-40293902643316/AnsiballZ_stat.py'
Feb 24 15:33:03 compute-0 sudo[152963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:03 compute-0 python3.9[152966]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:03 compute-0 sudo[152963]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:03 compute-0 sudo[153087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgjrlgrhbampgllvnlkwhaoqhbvfifo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947183.125829-778-40293902643316/AnsiballZ_copy.py'
Feb 24 15:33:03 compute-0 sudo[153087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:04 compute-0 python3.9[153090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947183.125829-778-40293902643316/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:04 compute-0 sudo[153087]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:04 compute-0 sudo[153251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittikdwrrvijqpxlrrfpjshfjmrcfozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947184.323999-778-75251986656077/AnsiballZ_stat.py'
Feb 24 15:33:04 compute-0 sudo[153251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:04 compute-0 podman[153214]: 2026-02-24 15:33:04.694100055 +0000 UTC m=+0.090842046 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:33:04 compute-0 python3.9[153263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:04 compute-0 sudo[153251]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:05 compute-0 sudo[153391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwizzhaaespkhntfgwhddtsqqmgosdhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947184.323999-778-75251986656077/AnsiballZ_copy.py'
Feb 24 15:33:05 compute-0 sudo[153391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:05 compute-0 python3.9[153394]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947184.323999-778-75251986656077/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:05 compute-0 sudo[153391]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:05 compute-0 sudo[153544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swwrfkhrxslgjeoxsgrmbpwfkauosaiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947185.5665858-778-130151453032628/AnsiballZ_stat.py'
Feb 24 15:33:05 compute-0 sudo[153544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:06 compute-0 python3.9[153547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:06 compute-0 sudo[153544]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:06 compute-0 sudo[153668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpeqbaifybalzdxmxjmlhlcdqfkpjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947185.5665858-778-130151453032628/AnsiballZ_copy.py'
Feb 24 15:33:06 compute-0 sudo[153668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:06 compute-0 python3.9[153671]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947185.5665858-778-130151453032628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:06 compute-0 sudo[153668]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:07 compute-0 sudo[153821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxzaylpzccbrvkgvrqishacemxhcddk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947186.8976834-778-44436368455419/AnsiballZ_stat.py'
Feb 24 15:33:07 compute-0 sudo[153821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:07 compute-0 python3.9[153824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:07 compute-0 sudo[153821]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:07 compute-0 sudo[153945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irofvgxgmuycnqcsgbykvhojjwrlgwoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947186.8976834-778-44436368455419/AnsiballZ_copy.py'
Feb 24 15:33:07 compute-0 sudo[153945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:07 compute-0 python3.9[153948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947186.8976834-778-44436368455419/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:07 compute-0 sudo[153945]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:08 compute-0 sudo[154098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgrhqprkrxckqflfzyqldvdtfplzvcbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947188.0776558-778-73633218967074/AnsiballZ_stat.py'
Feb 24 15:33:08 compute-0 sudo[154098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:08 compute-0 python3.9[154101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:08 compute-0 sudo[154098]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:08 compute-0 sudo[154222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitszpxqhuqajredqtsygafbsxxbpohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947188.0776558-778-73633218967074/AnsiballZ_copy.py'
Feb 24 15:33:08 compute-0 sudo[154222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:09 compute-0 python3.9[154225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947188.0776558-778-73633218967074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:09 compute-0 sudo[154222]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:09 compute-0 sudo[154375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqykvttxwhoovbbizeohkcrcvueowkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947189.292527-778-36503961376651/AnsiballZ_stat.py'
Feb 24 15:33:09 compute-0 sudo[154375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:09 compute-0 python3.9[154378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:09 compute-0 sudo[154375]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:10 compute-0 sudo[154499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffdflhtiinfotcsuwewuhuwjmrpybxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947189.292527-778-36503961376651/AnsiballZ_copy.py'
Feb 24 15:33:10 compute-0 sudo[154499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:10 compute-0 python3.9[154502]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947189.292527-778-36503961376651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:10 compute-0 sudo[154499]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:11 compute-0 sudo[154652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrykcrqyloivauivoqgtwrsnvomisqnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947190.5706995-778-200594548303279/AnsiballZ_stat.py'
Feb 24 15:33:11 compute-0 sudo[154652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:11 compute-0 python3.9[154655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:11 compute-0 sudo[154652]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:11 compute-0 sudo[154776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agrydzniwwudybvjzdslrkmiilflmfae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947190.5706995-778-200594548303279/AnsiballZ_copy.py'
Feb 24 15:33:11 compute-0 sudo[154776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:11 compute-0 python3.9[154779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947190.5706995-778-200594548303279/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:11 compute-0 sudo[154776]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:12 compute-0 sudo[154929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwzrlpzwjvvgjminxwsdihebjplbqnaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947191.9189742-778-248258076431103/AnsiballZ_stat.py'
Feb 24 15:33:12 compute-0 sudo[154929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:12 compute-0 python3.9[154932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:12 compute-0 sudo[154929]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:12 compute-0 sudo[155053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkjimenhlhbobhhrezzeuzvxpmaqxkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947191.9189742-778-248258076431103/AnsiballZ_copy.py'
Feb 24 15:33:12 compute-0 sudo[155053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:12 compute-0 python3.9[155056]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947191.9189742-778-248258076431103/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:12 compute-0 sudo[155053]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:13 compute-0 sudo[155206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxqyicdwwrfjsgyglpotsglnhhfxhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947193.1232424-778-153746813294998/AnsiballZ_stat.py'
Feb 24 15:33:13 compute-0 sudo[155206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:13 compute-0 python3.9[155209]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:13 compute-0 sudo[155206]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:14 compute-0 sudo[155330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sotaiwdsxdmfvgvjpquaeyjkkarinhse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947193.1232424-778-153746813294998/AnsiballZ_copy.py'
Feb 24 15:33:14 compute-0 sudo[155330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:14 compute-0 python3.9[155333]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947193.1232424-778-153746813294998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:14 compute-0 sudo[155330]: pam_unix(sudo:session): session closed for user root
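That completes all fourteen drop-ins. The rendered content is never logged (content=NOT_LOGGING_PARAMETER), so the following is only a plausible shape for a libvirt socket override, not the actual output of libvirt-socket.unit.j2; drop-ins of this kind typically adjust socket ownership or permissions:

    # /etc/systemd/system/virtqemud.socket.d/override.conf (hypothetical content)
    [Socket]
    SocketMode=0660
    SocketGroup=libvirt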
Feb 24 15:33:14 compute-0 python3.9[155483]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
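The command task above scans /run/libvirt for files carrying container SELinux types. With pipefail set, the task's return code is grep's: zero only if some entry under /run/libvirt is labeled with a container_*_t type. Broken out with comments:

    set -o pipefail
    # -Z adds the SELinux context column, -R recurses through /run/libvirt;
    # grep matches contexts whose type starts with container_, so the
    # pipeline's exit status reports whether any such labels are present.
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'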
Feb 24 15:33:15 compute-0 sudo[155636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqdhhnumauzctfbfmbcudilcskklxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947195.223267-984-50175132603867/AnsiballZ_seboolean.py'
Feb 24 15:33:15 compute-0 sudo[155636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:16 compute-0 python3.9[155639]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 24 15:33:17 compute-0 sudo[155636]: pam_unix(sudo:session): session closed for user root
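The seboolean task persistently enables os_enable_vtpm, the boolean that lets svirt-confined guests use an emulated TPM (swtpm); the load_policy event dbus-broker-launch reports just below (seqno=13) is consistent with the persistent boolean change being compiled into the running policy. CLI equivalent:

    # same effect as the ansible.posix.seboolean task above
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expect: os_enable_vtpm --> on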
Feb 24 15:33:17 compute-0 sudo[155793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hibtfkqmdxgbzdntgfntkzoqfjujnpup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947197.2292721-992-142499272513668/AnsiballZ_copy.py'
Feb 24 15:33:17 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 24 15:33:17 compute-0 sudo[155793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:17 compute-0 python3.9[155796]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:17 compute-0 sudo[155793]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:18 compute-0 sudo[155946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngwrbtiskyclfyerrjpbonudbgxusxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947197.9141622-992-123758725685783/AnsiballZ_copy.py'
Feb 24 15:33:18 compute-0 sudo[155946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:18 compute-0 python3.9[155949]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:18 compute-0 sudo[155946]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:18 compute-0 sudo[156099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgnoozntsnsugeauowvcgebnxbagjye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947198.6225376-992-10307498531006/AnsiballZ_copy.py'
Feb 24 15:33:18 compute-0 sudo[156099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:19 compute-0 python3.9[156102]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:19 compute-0 sudo[156099]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:19 compute-0 sudo[156254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfrgyuipnhrtyvvsulwdajouapveeoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947199.3208127-992-26859837211080/AnsiballZ_copy.py'
Feb 24 15:33:19 compute-0 sudo[156254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:19 compute-0 python3.9[156257]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:19 compute-0 sudo[156254]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:20 compute-0 sudo[156407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxaorxtxsalmkiynwkrzrwarssadplzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947200.0188885-992-22159865510168/AnsiballZ_copy.py'
Feb 24 15:33:20 compute-0 sudo[156407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:20 compute-0 python3.9[156410]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:20 compute-0 sudo[156407]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:21 compute-0 sudo[156560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsrncauysctdmrjnjleyeeqykouxatt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947200.7751706-1028-31058208717201/AnsiballZ_copy.py'
Feb 24 15:33:21 compute-0 sudo[156560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:21 compute-0 python3.9[156563]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:21 compute-0 sudo[156560]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:21 compute-0 sudo[156713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipjaognycsteiyqdodtxwbvdaqdvsuyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947201.4942496-1028-85880832701131/AnsiballZ_copy.py'
Feb 24 15:33:21 compute-0 sudo[156713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:22 compute-0 python3.9[156716]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:22 compute-0 sudo[156713]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:22 compute-0 sudo[156866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaipgulupkoiakmodalflkdqmjhcqmlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947202.1698184-1028-4736924776927/AnsiballZ_copy.py'
Feb 24 15:33:22 compute-0 sudo[156866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:22 compute-0 python3.9[156869]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:22 compute-0 sudo[156866]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:23 compute-0 sudo[157019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivkesxxnoksxiqaiklyuwdrtwqghoyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947202.883176-1028-212780794050908/AnsiballZ_copy.py'
Feb 24 15:33:23 compute-0 sudo[157019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:23 compute-0 python3.9[157022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:23 compute-0 sudo[157019]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:23 compute-0 sudo[157172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxiqazonpuvhxoligtgtambfiqgtopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947203.556258-1028-68284059503086/AnsiballZ_copy.py'
Feb 24 15:33:23 compute-0 sudo[157172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:23 compute-0 sshd-session[156103]: Invalid user ubuntu from 120.48.56.86 port 38236
Feb 24 15:33:24 compute-0 python3.9[157175]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:24 compute-0 sudo[157172]: pam_unix(sudo:session): session closed for user root
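The copy tasks above fan the same certificate material out to the default TLS paths for both libvirt and QEMU: tls.crt becomes servercert.pem and clientcert.pem (and the server-/client-cert.pem pair under /etc/pki/qemu), tls.key the matching keys, and ca.crt lands as /etc/pki/CA/cacert.pem and /etc/pki/qemu/ca-cert.pem. Note the logged modes: serverkey.pem is 0600 while clientkey.pem is 0644, and the QEMU copies are group qemu with mode 0640. A spot-check of the layout:

    ls -l /etc/pki/CA/cacert.pem \
          /etc/pki/libvirt/servercert.pem /etc/pki/libvirt/clientcert.pem \
          /etc/pki/libvirt/private/serverkey.pem /etc/pki/libvirt/private/clientkey.pem \
          /etc/pki/qemu/ca-cert.pem /etc/pki/qemu/server-cert.pem /etc/pki/qemu/server-key.pem \
          /etc/pki/qemu/client-cert.pem /etc/pki/qemu/client-key.pem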
Feb 24 15:33:24 compute-0 sshd-session[156103]: Received disconnect from 120.48.56.86 port 38236:11:  [preauth]
Feb 24 15:33:24 compute-0 sshd-session[156103]: Disconnected from invalid user ubuntu 120.48.56.86 port 38236 [preauth]
Feb 24 15:33:24 compute-0 sudo[157325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbspfcojydqgcplqkuoqoesvtbdeubc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947204.290154-1064-76701592096386/AnsiballZ_systemd.py'
Feb 24 15:33:24 compute-0 sudo[157325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:24 compute-0 python3.9[157328]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:33:24 compute-0 systemd[1]: Reloading.
Feb 24 15:33:25 compute-0 systemd-rc-local-generator[157350]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:25 compute-0 systemd-sysv-generator[157355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:33:26 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Feb 24 15:33:26 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Feb 24 15:33:26 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 24 15:33:26 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 24 15:33:26 compute-0 systemd[1]: Starting libvirt logging daemon...
Feb 24 15:33:26 compute-0 podman[157374]: 2026-02-24 15:33:26.33224023 +0000 UTC m=+0.068272374 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 24 15:33:26 compute-0 systemd[1]: Started libvirt logging daemon.
Feb 24 15:33:26 compute-0 sudo[157325]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:26 compute-0 sudo[157545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxfxsfmjnoddeprwvrlrkdetkiocorg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947206.5377057-1064-262917038381988/AnsiballZ_systemd.py'
Feb 24 15:33:26 compute-0 sudo[157545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:27 compute-0 python3.9[157548]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:33:27 compute-0 systemd[1]: Reloading.
Feb 24 15:33:27 compute-0 systemd-rc-local-generator[157565]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:27 compute-0 systemd-sysv-generator[157572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:33:27 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 24 15:33:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 24 15:33:27 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 24 15:33:27 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 24 15:33:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 24 15:33:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 24 15:33:27 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 24 15:33:27 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 24 15:33:27 compute-0 sudo[157545]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:27 compute-0 sudo[157769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckbblisirviayxsuvxpmxdoqxjpznuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947207.7374148-1064-141058145812324/AnsiballZ_systemd.py'
Feb 24 15:33:28 compute-0 sudo[157769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:28 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 24 15:33:28 compute-0 python3.9[157772]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:33:28 compute-0 systemd[1]: Reloading.
Feb 24 15:33:28 compute-0 systemd-sysv-generator[157804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:33:28 compute-0 systemd-rc-local-generator[157797]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:28 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 24 15:33:28 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 24 15:33:28 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 24 15:33:28 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 24 15:33:28 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 24 15:33:28 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 15:33:28 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 15:33:28 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 24 15:33:28 compute-0 sudo[157769]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:28 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 24 15:33:29 compute-0 sudo[157997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkztrcfnafjimpyopwdtyvsrjxijzdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947208.8558176-1064-234687450228243/AnsiballZ_systemd.py'
Feb 24 15:33:29 compute-0 sudo[157997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:29 compute-0 python3.9[158000]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:33:29 compute-0 systemd[1]: Reloading.
Feb 24 15:33:29 compute-0 systemd-sysv-generator[158025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:33:29 compute-0 systemd-rc-local-generator[158022]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:29 compute-0 setroubleshoot[157773]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 19a30608-0c46-40a6-9822-20c33f69e401
Feb 24 15:33:29 compute-0 setroubleshoot[157773]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
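Condensing the two suggestions: reproduce the denial with full auditing on, read the AVC, and only load a local module if the access is genuinely required. The rule audit2allow would emit is an expectation, not taken from this log:

    # inspect the denial behind this alert
    ausearch -m avc -ts recent -c virtlogd
    # my-virtlogd.te would contain approximately:
    #   allow virtlogd_t self:capability dac_read_search;
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp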
                                                  
Feb 24 15:33:29 compute-0 setroubleshoot[157773]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 19a30608-0c46-40a6-9822-20c33f69e401
Feb 24 15:33:29 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:33:29 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:33:29 compute-0 setroubleshoot[157773]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  To allow this access for now, execute:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
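                                                  Taken together, the two plugin suggestions above describe a single triage sequence. A
                                                  sketch of that sequence follows (the /etc/shadow watch and the my-virtlogd module name
                                                  are the plugin's own examples; reproduce the denial between the watch and the search):
                                                  # auditctl -w /etc/shadow -p w        # turn on full auditing via a file watch
                                                  # ausearch -m avc -ts recent          # after reproducing: look for a PATH record
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd   # else build a local module
                                                  # semodule -X 300 -i my-virtlogd.pp   # install it at priority 300
                                                  # semodule -l | grep my-virtlogd      # verify the module is loaded
                                                  # auditctl -W /etc/shadow -p w        # remove the temporary watch again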
Feb 24 15:33:29 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Feb 24 15:33:29 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 24 15:33:29 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 24 15:33:29 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 24 15:33:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 24 15:33:29 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 24 15:33:29 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 24 15:33:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 24 15:33:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 24 15:33:29 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 24 15:33:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 24 15:33:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 24 15:33:29 compute-0 sudo[157997]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:30 compute-0 sudo[158222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxnihsbtlvirewxhimcxtrasvgfrxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947210.0365386-1064-75521939819587/AnsiballZ_systemd.py'
Feb 24 15:33:30 compute-0 sudo[158222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:30 compute-0 python3.9[158225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:33:30 compute-0 systemd[1]: Reloading.
Feb 24 15:33:30 compute-0 systemd-rc-local-generator[158246]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:30 compute-0 systemd-sysv-generator[158251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Feb 24 15:33:30 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Feb 24 15:33:30 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Feb 24 15:33:30 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 24 15:33:30 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 24 15:33:30 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 24 15:33:30 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 24 15:33:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 24 15:33:31 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 24 15:33:31 compute-0 sudo[158222]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:31 compute-0 sudo[158442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbbmfsikfruxsaunkorgjmfhytdvlpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947211.3466735-1101-93651366206943/AnsiballZ_file.py'
Feb 24 15:33:31 compute-0 sudo[158442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:31 compute-0 python3.9[158445]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:31 compute-0 sudo[158442]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:32 compute-0 sudo[158595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nygruxfubctmlxeydwsodrbgqsqwmpyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947212.0668619-1109-179536455450701/AnsiballZ_find.py'
Feb 24 15:33:32 compute-0 sudo[158595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:32 compute-0 python3.9[158598]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:33:32 compute-0 sudo[158595]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:33 compute-0 sudo[158748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqkokfngigjbvuknrxwsgvhwqensyabz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947213.0222313-1123-57174793812876/AnsiballZ_stat.py'
Feb 24 15:33:33 compute-0 sudo[158748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:33 compute-0 python3.9[158751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:33 compute-0 sudo[158748]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:33 compute-0 sudo[158872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daibfbxaqgyytfxolajdppcvnxreppfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947213.0222313-1123-57174793812876/AnsiballZ_copy.py'
Feb 24 15:33:33 compute-0 sudo[158872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:34 compute-0 python3.9[158875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947213.0222313-1123-57174793812876/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:34 compute-0 sudo[158872]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:34 compute-0 sudo[159038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bikqsraoemvduakuiljjpyliykztxmql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947214.4839952-1139-249232762002423/AnsiballZ_file.py'
Feb 24 15:33:34 compute-0 sudo[159038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:34 compute-0 podman[158999]: 2026-02-24 15:33:34.82814081 +0000 UTC m=+0.087581257 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 15:33:34 compute-0 python3.9[159047]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:34 compute-0 sudo[159038]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:35 compute-0 sudo[159205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfzmdqidcqqkacufpjhvtxzujhakuvhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947215.204839-1147-213597246780221/AnsiballZ_stat.py'
Feb 24 15:33:35 compute-0 sudo[159205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:35 compute-0 python3.9[159208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:35 compute-0 sudo[159205]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:35 compute-0 sudo[159284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhixgafvoeuhuyptoktlsacapslslxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947215.204839-1147-213597246780221/AnsiballZ_file.py'
Feb 24 15:33:35 compute-0 sudo[159284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:36 compute-0 python3.9[159287]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:36 compute-0 sudo[159284]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:36 compute-0 sudo[159437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwockganwxqdflhprbvfydipqjuicgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947216.3655722-1159-104340676627727/AnsiballZ_stat.py'
Feb 24 15:33:36 compute-0 sudo[159437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:36 compute-0 python3.9[159440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:36 compute-0 sudo[159437]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:37 compute-0 sudo[159516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxpmmglsbcupxcuqlihnucahculkpolh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947216.3655722-1159-104340676627727/AnsiballZ_file.py'
Feb 24 15:33:37 compute-0 sudo[159516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:37 compute-0 python3.9[159519]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.m0c_75l4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:37 compute-0 sudo[159516]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:37 compute-0 sudo[159669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdwhfizlknjmkxfjlesbdfxgzwtsach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947217.6053462-1171-255703326264358/AnsiballZ_stat.py'
Feb 24 15:33:37 compute-0 sudo[159669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:38 compute-0 python3.9[159672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:38 compute-0 sudo[159669]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:38 compute-0 sudo[159748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpappisvyhmzultslkgspcvgevpvgcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947217.6053462-1171-255703326264358/AnsiballZ_file.py'
Feb 24 15:33:38 compute-0 sudo[159748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:38 compute-0 python3.9[159751]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:38 compute-0 sudo[159748]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:39 compute-0 sudo[159901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvhrmwjepkdigurbofjnrjhyqecsbhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947218.7875075-1184-160096075363269/AnsiballZ_command.py'
Feb 24 15:33:39 compute-0 sudo[159901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:39 compute-0 python3.9[159904]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:33:39 compute-0 sudo[159901]: pam_unix(sudo:session): session closed for user root
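The `nft -j list ruleset` call above dumps the live ruleset as JSON so the playbook can inspect it programmatically. The output has roughly the following shape (a sketch; the table and chain shown are illustrative, not values from this host):
    {"nftables": [
      {"metainfo": {"version": "...", "release_name": "...", "json_schema_version": 1}},
      {"table": {"family": "inet", "name": "filter", "handle": 1}},
      {"chain": {"family": "inet", "table": "filter", "name": "INPUT", "handle": 1,
                 "type": "filter", "hook": "input", "prio": 0, "policy": "accept"}}
    ]}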
Feb 24 15:33:39 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 24 15:33:39 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 24 15:33:39 compute-0 sudo[160055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkbofnylahmucpmgdzdrgspgktbuqkn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947219.5163078-1192-83696383860395/AnsiballZ_edpm_nftables_from_files.py'
Feb 24 15:33:39 compute-0 sudo[160055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:40 compute-0 python3[160058]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 24 15:33:40 compute-0 sudo[160055]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:40 compute-0 sudo[160208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmwyavefclfyvfcrbivyxoinvhoedbwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947220.2846951-1200-7099140110741/AnsiballZ_stat.py'
Feb 24 15:33:40 compute-0 sudo[160208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:40 compute-0 python3.9[160211]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:40 compute-0 sudo[160208]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:41 compute-0 sudo[160287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skkvzmngxncogyhksxenfkdnzftasfdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947220.2846951-1200-7099140110741/AnsiballZ_file.py'
Feb 24 15:33:41 compute-0 sudo[160287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:41 compute-0 python3.9[160290]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:41 compute-0 sudo[160287]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:41 compute-0 sudo[160440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpeifwvjrzgntlltynuffssinruautxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947221.394693-1212-155773006679923/AnsiballZ_stat.py'
Feb 24 15:33:41 compute-0 sudo[160440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:41 compute-0 python3.9[160443]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:42 compute-0 sudo[160440]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:42 compute-0 sudo[160566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzqfsltakpkysuvqxpywerckuypwdsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947221.394693-1212-155773006679923/AnsiballZ_copy.py'
Feb 24 15:33:42 compute-0 sudo[160566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:42 compute-0 python3.9[160569]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947221.394693-1212-155773006679923/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:42 compute-0 sudo[160566]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:43 compute-0 sudo[160719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmzmwtzknnlsvovzqrvtnqyrydorrzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947222.7650456-1227-214021735995684/AnsiballZ_stat.py'
Feb 24 15:33:43 compute-0 sudo[160719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:43 compute-0 python3.9[160722]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:43 compute-0 sudo[160719]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:43 compute-0 sudo[160798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klicrwkqqfncijoqarbazzclrlkwkvdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947222.7650456-1227-214021735995684/AnsiballZ_file.py'
Feb 24 15:33:43 compute-0 sudo[160798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:43 compute-0 python3.9[160801]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:43 compute-0 sudo[160798]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:44 compute-0 sudo[160951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddxnxlkppgznsxdiaagczjctycvgunoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947224.049124-1239-224928265280821/AnsiballZ_stat.py'
Feb 24 15:33:44 compute-0 sudo[160951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:44 compute-0 python3.9[160954]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:44 compute-0 sudo[160951]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:44 compute-0 sudo[161030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizfyjouetqkadwlkkoziexdqbiwgvsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947224.049124-1239-224928265280821/AnsiballZ_file.py'
Feb 24 15:33:44 compute-0 sudo[161030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:45 compute-0 python3.9[161033]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:45 compute-0 sudo[161030]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:45 compute-0 sudo[161183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jikdavabewrfzzcushldvekslpxhwmtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947225.280934-1251-184923249243789/AnsiballZ_stat.py'
Feb 24 15:33:45 compute-0 sudo[161183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:45 compute-0 python3.9[161186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:45 compute-0 sudo[161183]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:46 compute-0 sudo[161309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ernhhzajvfnbrccmgdwwggcsvwmrjlcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947225.280934-1251-184923249243789/AnsiballZ_copy.py'
Feb 24 15:33:46 compute-0 sudo[161309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:46 compute-0 python3.9[161312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947225.280934-1251-184923249243789/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:46 compute-0 sudo[161309]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:47 compute-0 sshd-session[161412]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 15:33:47 compute-0 sudo[161464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhuvvuqfzknmdvbkwptnwyiuucrqhpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947226.8409114-1266-222190840186650/AnsiballZ_file.py'
Feb 24 15:33:47 compute-0 sudo[161464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:47 compute-0 python3.9[161467]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:47 compute-0 sudo[161464]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:47 compute-0 sudo[161617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxmksejzizekhsnkjxbrhrtltpzpebfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947227.5667207-1274-226740722398364/AnsiballZ_command.py'
Feb 24 15:33:47 compute-0 sudo[161617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:48 compute-0 python3.9[161620]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:33:48 compute-0 sudo[161617]: pam_unix(sudo:session): session closed for user root
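The pipeline above is the validation half of a check-then-apply pattern: the five fragment files are concatenated in dependency order (chains, flushes, rules, update-jumps, jumps) and fed to `nft -c -f -`, which parses the combined ruleset without committing it. Only after this dry run succeeds do the later tasks in this run apply the fragments for real:
    # cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -       # -c: check only, commit nothing
    # nft -f /etc/nftables/edpm-chains.nft                 # ensure the chains exist (15:33:49)
    # cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f -   # flush and reload the rules (15:33:51)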
Feb 24 15:33:48 compute-0 sudo[161773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prdilscxpphxzextriupskwkcgposdjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947228.2868898-1282-262925255764156/AnsiballZ_blockinfile.py'
Feb 24 15:33:48 compute-0 sudo[161773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:48 compute-0 python3.9[161776]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:48 compute-0 sudo[161773]: pam_unix(sudo:session): session closed for user root
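Given the marker parameters logged above (marker=# {mark} ANSIBLE MANAGED BLOCK with marker_begin=BEGIN and marker_end=END), the managed block written to /etc/sysconfig/nftables.conf comes out as follows, so the same fragments are included again at boot; the validate=nft -c -f %s option makes blockinfile reject the edit if the resulting file fails an nft syntax check:
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK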
Feb 24 15:33:49 compute-0 sudo[161926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbxptnryuxjslmqzindqzhhdmnfqxabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947229.141224-1291-199735591708599/AnsiballZ_command.py'
Feb 24 15:33:49 compute-0 sudo[161926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:49 compute-0 python3.9[161929]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:33:49 compute-0 sudo[161926]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:50 compute-0 sudo[162080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdigsdvcqtemrbfavorafzgvdcmqhkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947229.8800275-1299-211611482012218/AnsiballZ_stat.py'
Feb 24 15:33:50 compute-0 sudo[162080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:50 compute-0 python3.9[162083]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:33:50 compute-0 sudo[162080]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:51 compute-0 sudo[162235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkcetjbolinypqoiwkhxrohfkynqrje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947230.6483295-1307-233981790850779/AnsiballZ_command.py'
Feb 24 15:33:51 compute-0 sudo[162235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:51 compute-0 python3.9[162238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:33:51 compute-0 sudo[162235]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:51 compute-0 sudo[162391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znxatxrwpmwngszpxemknzsanbgclkrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947231.4766006-1315-24509651026910/AnsiballZ_file.py'
Feb 24 15:33:51 compute-0 sudo[162391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:51 compute-0 python3.9[162394]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:52 compute-0 sudo[162391]: pam_unix(sudo:session): session closed for user root
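The edpm-rules.nft.changed file is a change sentinel: it was touched when edpm-rules.nft was rewritten (15:33:47), stat'ed to decide whether a reload was needed (15:33:50), and deleted here once the reload ran. The handler logic amounts to roughly the following (a sketch of the pattern, not the playbook's literal code):
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        # rules changed since the last apply: flush and reload, then clear the flag
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi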
Feb 24 15:33:52 compute-0 sudo[162544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-katkidvjaogyzqzuybrbcvuzzyrzolru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947232.1963048-1323-57884169642422/AnsiballZ_stat.py'
Feb 24 15:33:52 compute-0 sudo[162544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:52 compute-0 python3.9[162547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:52 compute-0 sudo[162544]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:53 compute-0 sudo[162668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqziecxhhawgzepqqcokddzhpycknspx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947232.1963048-1323-57884169642422/AnsiballZ_copy.py'
Feb 24 15:33:53 compute-0 sudo[162668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:53 compute-0 python3.9[162671]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947232.1963048-1323-57884169642422/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:53 compute-0 sudo[162668]: pam_unix(sudo:session): session closed for user root
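The log records only the checksum of edpm_libvirt.target, not its content. A unit of this kind is typically a small grouping target along the following lines (an assumption for illustration; the actual Wants= list is not visible in this log, though virtlogd and virtsecretd are managed elsewhere in this run):
    [Unit]
    Description=edpm_libvirt target
    Wants=virtqemud.service virtlogd.service virtsecretd.service

    [Install]
    WantedBy=multi-user.target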
Feb 24 15:33:53 compute-0 sudo[162821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdscfeciqaxyftdhiuarbluinwfztlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947233.4904602-1338-48318513784341/AnsiballZ_stat.py'
Feb 24 15:33:53 compute-0 sudo[162821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:54 compute-0 python3.9[162824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:54 compute-0 sudo[162821]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:54 compute-0 sudo[162945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbvletzxepatdgdbjdheotgcxaagqqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947233.4904602-1338-48318513784341/AnsiballZ_copy.py'
Feb 24 15:33:54 compute-0 sudo[162945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:54 compute-0 python3.9[162948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947233.4904602-1338-48318513784341/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:54 compute-0 sudo[162945]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:33:55.411 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:33:55.411 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:33:55.411 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:33:55 compute-0 sudo[163098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azarsiqcefibbbavzkqivlslorolqvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947234.8597338-1353-158870631606957/AnsiballZ_stat.py'
Feb 24 15:33:55 compute-0 sudo[163098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:55 compute-0 python3.9[163101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:33:55 compute-0 sudo[163098]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:56 compute-0 sudo[163222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrzjhqiltacwjxjjomtzlvkugfgrgzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947234.8597338-1353-158870631606957/AnsiballZ_copy.py'
Feb 24 15:33:56 compute-0 sudo[163222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:56 compute-0 python3.9[163225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947234.8597338-1353-158870631606957/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:33:56 compute-0 sudo[163222]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:56 compute-0 sudo[163385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqrylywavmiprnvkmfyfzptysuarnur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947236.427569-1368-250988419253278/AnsiballZ_systemd.py'
Feb 24 15:33:56 compute-0 sudo[163385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:56 compute-0 podman[163349]: 2026-02-24 15:33:56.741177905 +0000 UTC m=+0.072179770 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Feb 24 15:33:56 compute-0 python3.9[163391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:33:56 compute-0 systemd[1]: Reloading.
Feb 24 15:33:57 compute-0 systemd-rc-local-generator[163421]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:57 compute-0 systemd-sysv-generator[163429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Feb 24 15:33:57 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Feb 24 15:33:57 compute-0 sudo[163385]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:57 compute-0 sudo[163592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaggedmdkhozywnunserigrjdkowupjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947237.528092-1376-221019732949683/AnsiballZ_systemd.py'
Feb 24 15:33:57 compute-0 sudo[163592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:33:58 compute-0 python3.9[163595]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 24 15:33:58 compute-0 systemd[1]: Reloading.
Feb 24 15:33:58 compute-0 systemd-sysv-generator[163622]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Feb 24 15:33:58 compute-0 systemd-rc-local-generator[163618]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:58 compute-0 systemd[1]: Reloading.
Feb 24 15:33:58 compute-0 systemd-rc-local-generator[163662]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:33:58 compute-0 systemd-sysv-generator[163667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Feb 24 15:33:58 compute-0 sudo[163592]: pam_unix(sudo:session): session closed for user root
Feb 24 15:33:59 compute-0 sshd-session[108600]: Connection closed by 192.168.122.30 port 49436
Feb 24 15:33:59 compute-0 sshd-session[108582]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:33:59 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Feb 24 15:33:59 compute-0 systemd[1]: session-22.scope: Consumed 3min 10.957s CPU time.
Feb 24 15:33:59 compute-0 systemd-logind[813]: Session 22 logged out. Waiting for processes to exit.
Feb 24 15:33:59 compute-0 systemd-logind[813]: Removed session 22.
Feb 24 15:34:04 compute-0 sshd-session[163706]: Accepted publickey for zuul from 192.168.122.30 port 36340 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:34:04 compute-0 systemd-logind[813]: New session 23 of user zuul.
Feb 24 15:34:04 compute-0 systemd[1]: Started Session 23 of User zuul.
Feb 24 15:34:04 compute-0 sshd-session[163706]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:34:05 compute-0 podman[163833]: 2026-02-24 15:34:05.134284066 +0000 UTC m=+0.099404629 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:34:05 compute-0 python3.9[163870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:34:06 compute-0 python3.9[164040]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:34:06 compute-0 network[164057]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 24 15:34:06 compute-0 network[164058]: 'network-scripts' will be removed from the distribution in the near future.
Feb 24 15:34:06 compute-0 network[164059]: It is advised to switch to 'NetworkManager' for network management instead.
Feb 24 15:34:09 compute-0 sudo[164329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqgmgdvsgzzjxoefpczsbxepfbzjomln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947249.6652496-42-270162064559558/AnsiballZ_setup.py'
Feb 24 15:34:09 compute-0 sudo[164329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:10 compute-0 python3.9[164332]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:34:10 compute-0 sudo[164329]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:11 compute-0 sudo[164414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhpqtimcywueamqfzwvjrmznzdpgyvup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947249.6652496-42-270162064559558/AnsiballZ_dnf.py'
Feb 24 15:34:11 compute-0 sudo[164414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:11 compute-0 python3.9[164417]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:34:16 compute-0 sudo[164414]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:16 compute-0 sudo[164568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwskaticdkdygfigwtqhhbupzlvpyfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947256.3208718-54-43990457529166/AnsiballZ_stat.py'
Feb 24 15:34:16 compute-0 sudo[164568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:17 compute-0 python3.9[164571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:34:17 compute-0 sudo[164568]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:17 compute-0 sudo[164721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hixkwxgshhpzuvjsairkbgtxbxvnpldj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947257.3049164-64-156123380279701/AnsiballZ_command.py'
Feb 24 15:34:17 compute-0 sudo[164721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:17 compute-0 python3.9[164724]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:34:18 compute-0 sudo[164721]: pam_unix(sudo:session): session closed for user root
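The restorecon flags above make this a read-only audit: -n reports files whose SELinux context differs from the policy default without relabeling them, -v prints each mismatch, and -r recurses through /etc/iscsi and /var/lib/iscsi. Dropping -n would apply the fix:
    # /usr/sbin/restorecon -vr /etc/iscsi /var/lib/iscsi   # actually relabel mismatched files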
Feb 24 15:34:18 compute-0 sudo[164875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdeljjdikkitzdqvqvdgohokqtspjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947258.350161-74-236164172989425/AnsiballZ_stat.py'
Feb 24 15:34:18 compute-0 sudo[164875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:18 compute-0 python3.9[164878]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:34:18 compute-0 sudo[164875]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:19 compute-0 sudo[165028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inyjpjpxpvnjodezqtkuepyrakwdqhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947259.0816815-82-129489762526171/AnsiballZ_command.py'
Feb 24 15:34:19 compute-0 sudo[165028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:19 compute-0 python3.9[165031]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:34:19 compute-0 sudo[165028]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:20 compute-0 sudo[165182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxixigrdravrusucowrytllrmfezbluj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947259.7803738-90-221875064077515/AnsiballZ_stat.py'
Feb 24 15:34:20 compute-0 sudo[165182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:20 compute-0 python3.9[165185]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:34:20 compute-0 sudo[165182]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:20 compute-0 sudo[165306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvsrusttpoqkxpajeidccjwltbyggxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947259.7803738-90-221875064077515/AnsiballZ_copy.py'
Feb 24 15:34:20 compute-0 sudo[165306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:21 compute-0 python3.9[165309]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947259.7803738-90-221875064077515/.source.iscsi _original_basename=._dsaubg6 follow=False checksum=fa98a99ee56eeaf914d017f1958b9be0273df50e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:21 compute-0 sudo[165306]: pam_unix(sudo:session): session closed for user root
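The /usr/sbin/iscsi-iname run at 15:34:19 generated a fresh initiator IQN, and this copy task writes it into /etc/iscsi/initiatorname.iscsi. The file holds a single assignment of the following form (the IQN shown is a made-up example using the usual Red Hat prefix, not the value generated on this host):
    InitiatorName=iqn.1994-05.com.redhat:8a2c6f1e4d5b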
Feb 24 15:34:21 compute-0 sudo[165459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwpskmlzxsrtbtidqyggeachqktswkfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947261.3078182-105-11759960986151/AnsiballZ_file.py'
Feb 24 15:34:21 compute-0 sudo[165459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:21 compute-0 python3.9[165462]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:21 compute-0 sudo[165459]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:22 compute-0 sudo[165612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idasidooimaokpxrieszvldaflcegpog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947262.2179196-113-220683877195546/AnsiballZ_lineinfile.py'
Feb 24 15:34:22 compute-0 sudo[165612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:22 compute-0 python3.9[165615]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:22 compute-0 sudo[165612]: pam_unix(sudo:session): session closed for user root
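The lineinfile task above pins the CHAP digest preference in /etc/iscsi/iscsid.conf: a line matching ^node.session.auth.chap_algs is replaced in place, and if none exists the new line is inserted after the commented-out default (insertafter=^#node.session.auth.chap.algs). The managed line reads:
    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5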
Feb 24 15:34:23 compute-0 sudo[165765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrtsomnyouawcfrrongckvduclcmqfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947263.1578155-122-217576498674034/AnsiballZ_systemd_service.py'
Feb 24 15:34:23 compute-0 sudo[165765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:24 compute-0 python3.9[165768]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:34:25 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 24 15:34:25 compute-0 sudo[165765]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:25 compute-0 sudo[165922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelxyqponmiiuyjmumjeokedkiwpyyog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947265.3193357-130-61045277630428/AnsiballZ_systemd_service.py'
Feb 24 15:34:25 compute-0 sudo[165922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:25 compute-0 python3.9[165925]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:34:25 compute-0 systemd[1]: Reloading.
Feb 24 15:34:26 compute-0 systemd-sysv-generator[165957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:34:26 compute-0 systemd-rc-local-generator[165949]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:34:26 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 24 15:34:26 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 24 15:34:26 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Feb 24 15:34:26 compute-0 systemd[1]: Started Open-iSCSI.
Feb 24 15:34:26 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Feb 24 15:34:26 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Feb 24 15:34:26 compute-0 sudo[165922]: pam_unix(sudo:session): session closed for user root
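iscsid.socket is enabled and started before iscsid.service, the standard socket-activation order: systemd owns the control socket first, then the daemon, so nothing can hit an unbound socket in between. Note also that the one-time iscsi.service configuration was skipped (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi) precisely because the initiator name was already deployed above. A sketch of the two logged systemd_service calls:

    - name: Enable iscsid socket activation
      ansible.builtin.systemd_service:
        name: iscsid.socket
        enabled: true
        state: started

    - name: Enable and start the iscsid daemon
      ansible.builtin.systemd_service:
        name: iscsid
        enabled: true
        state: started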
Feb 24 15:34:27 compute-0 podman[166081]: 2026-02-24 15:34:27.120304212 +0000 UTC m=+0.074840012 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:34:27 compute-0 python3.9[166151]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:34:27 compute-0 network[166169]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:34:27 compute-0 network[166170]: 'network-scripts' will be removed from the distribution in the near future.
Feb 24 15:34:27 compute-0 network[166171]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 24 15:34:27 compute-0 sshd-session[166177]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 15:34:30 compute-0 sudo[166443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktcmvfiovrjuwmynznkoomthhfmdadua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947270.5874186-153-75584138650139/AnsiballZ_dnf.py'
Feb 24 15:34:30 compute-0 sudo[166443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:31 compute-0 python3.9[166446]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:34:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:34:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:34:33 compute-0 systemd[1]: Reloading.
Feb 24 15:34:33 compute-0 systemd-rc-local-generator[166487]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:34:33 compute-0 systemd-sysv-generator[166490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:34:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:34:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:34:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:34:33 compute-0 systemd[1]: run-r5bf839e01bec410b8649b1b12167ad85.service: Deactivated successfully.
Feb 24 15:34:34 compute-0 sudo[166443]: pam_unix(sudo:session): session closed for user root
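The dnf task installs device-mapper-multipath with stock parameters; state=present makes it a no-op on reruns, and the man-db-cache-update unit that fires afterwards is the normal post-transaction trigger, not part of the play. The equivalent minimal task:

    - name: Install multipath tooling
      ansible.builtin.dnf:
        name: device-mapper-multipath
        state: present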
Feb 24 15:34:34 compute-0 sudo[166776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quaujbqogmarxudqlpuvoavwcfpuiwxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947274.4173458-162-245100103252090/AnsiballZ_file.py'
Feb 24 15:34:34 compute-0 sudo[166776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:34 compute-0 python3.9[166779]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 24 15:34:34 compute-0 sudo[166776]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:35 compute-0 sudo[166940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvoopaupzddexcxgpioxetybzrtpdmuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947275.178733-170-262282458318251/AnsiballZ_modprobe.py'
Feb 24 15:34:35 compute-0 sudo[166940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:35 compute-0 podman[166903]: 2026-02-24 15:34:35.695933476 +0000 UTC m=+0.108599728 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Feb 24 15:34:35 compute-0 python3.9[166946]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 24 15:34:35 compute-0 sudo[166940]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:36 compute-0 sudo[167110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqhzzpemoxzyhnvlrodybztbekylhfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947276.0592027-178-226968434713576/AnsiballZ_stat.py'
Feb 24 15:34:36 compute-0 sudo[167110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:36 compute-0 python3.9[167113]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:34:36 compute-0 sudo[167110]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:36 compute-0 sudo[167234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrlatyytfynxdlkhevllbrrbezxwhuut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947276.0592027-178-226968434713576/AnsiballZ_copy.py'
Feb 24 15:34:36 compute-0 sudo[167234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:37 compute-0 python3.9[167237]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947276.0592027-178-226968434713576/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:37 compute-0 sudo[167234]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:37 compute-0 sudo[167387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycyqonstiprjdcfgxlfzbtyvgyaglxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947277.5297434-194-263238636637434/AnsiballZ_lineinfile.py'
Feb 24 15:34:37 compute-0 sudo[167387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:38 compute-0 python3.9[167390]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:38 compute-0 sudo[167387]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:38 compute-0 sudo[167540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkpvixeyffpbstwhvdrejfqpzmkeyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947278.2461166-202-21617242823099/AnsiballZ_systemd.py'
Feb 24 15:34:38 compute-0 sudo[167540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:39 compute-0 python3.9[167543]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:34:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 24 15:34:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 24 15:34:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 24 15:34:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 24 15:34:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 24 15:34:39 compute-0 sudo[167540]: pam_unix(sudo:session): session closed for user root
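dm-multipath is handled with the usual three-part module-persistence pattern: modprobe loads it immediately (persistent=disabled defers persistence to the files), /etc/modules-load.d/dm-multipath.conf makes systemd load it at boot, an entry in /etc/modules covers older tooling, and restarting systemd-modules-load.service proves the new config parses. A sketch under those assumptions; the rendered template content is not logged, so the one-line body below is inferred from the modules-load.d(5) format:

    - name: Load dm-multipath now
      community.general.modprobe:
        name: dm-multipath
        state: present
        persistent: disabled

    - name: Load dm-multipath at boot
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        content: "dm-multipath\n"   # assumed: one module name per line
        mode: '0644'

    - name: Verify the module configuration parses
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted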
Feb 24 15:34:39 compute-0 sudo[167697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eepxiqflqrtikmoskangiovbjcevjphj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947279.4536662-210-174397950174474/AnsiballZ_command.py'
Feb 24 15:34:39 compute-0 sudo[167697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:39 compute-0 python3.9[167700]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:34:39 compute-0 sudo[167697]: pam_unix(sudo:session): session closed for user root
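restorecon runs with -n, so this is a dry run: it lists files under /etc/multipath whose SELinux context deviates from policy without relabeling anything. A sketch of the logged command task; the register name and changed_when guard are assumptions added to show how such a check is typically consumed:

    - name: Check SELinux labels under /etc/multipath (report only)
      ansible.builtin.command: /usr/sbin/restorecon -nvr /etc/multipath
      register: multipath_restorecon          # hypothetical name
      changed_when: multipath_restorecon.stdout | length > 0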
Feb 24 15:34:40 compute-0 sudo[167851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhlwzymljhwflqqipzyolfxipyjkaahg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947280.263056-220-36571988169032/AnsiballZ_stat.py'
Feb 24 15:34:40 compute-0 sudo[167851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:40 compute-0 python3.9[167854]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:34:40 compute-0 sudo[167851]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:41 compute-0 sudo[168004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlybsetshokhqhfcjnpsbhorhpygese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947281.0441-229-100356827932102/AnsiballZ_stat.py'
Feb 24 15:34:41 compute-0 sudo[168004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:41 compute-0 python3.9[168007]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:34:41 compute-0 sudo[168004]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:41 compute-0 sudo[168128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhowilbdizvzuojovjvjlimrkyyzjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947281.0441-229-100356827932102/AnsiballZ_copy.py'
Feb 24 15:34:41 compute-0 sudo[168128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:42 compute-0 python3.9[168131]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947281.0441-229-100356827932102/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:42 compute-0 sudo[168128]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:42 compute-0 sudo[168281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxnesaaqpizjwmtuuzixgglexcpdbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947282.297862-244-268688387769047/AnsiballZ_command.py'
Feb 24 15:34:42 compute-0 sudo[168281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:42 compute-0 python3.9[168284]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:34:42 compute-0 sudo[168281]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:43 compute-0 sudo[168435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgmlqhpazzhzojfbonpoffpuwvffmlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947282.9998705-252-22404281710741/AnsiballZ_lineinfile.py'
Feb 24 15:34:43 compute-0 sudo[168435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:43 compute-0 python3.9[168438]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:43 compute-0 sudo[168435]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:44 compute-0 sudo[168588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjyjdkkczetrjsegevnczdmzpkekuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947283.7100194-260-114854745502457/AnsiballZ_replace.py'
Feb 24 15:34:44 compute-0 sudo[168588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:44 compute-0 python3.9[168591]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:44 compute-0 sudo[168588]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:44 compute-0 sudo[168741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqwgqwdmjeqjekrkesujmfaqkxcniss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947284.6241071-268-45879762931870/AnsiballZ_replace.py'
Feb 24 15:34:44 compute-0 sudo[168741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:45 compute-0 python3.9[168744]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:45 compute-0 sudo[168741]: pam_unix(sudo:session): session closed for user root
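The grep, lineinfile, and replace calls above build a well-formed blacklist section in idempotent steps: grep tests for an existing '^blacklist\s*{' block, lineinfile guarantees the opening 'blacklist {' line, the first replace closes a freshly inserted block with '}', and the second strips the catch-all 'devnode ".*"' entry that would otherwise blacklist every device. In the real play the later steps are presumably guarded by the grep result (the conditionals are not logged); the two replace tasks as invoked:

    - name: Close a freshly inserted blacklist block
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^(blacklist {)'
        replace: '\1\n}'

    - name: Drop the catch-all devnode blacklist entry
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^blacklist\s*{\n[\s]+devnode "\.\*"'
        replace: 'blacklist {'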
Feb 24 15:34:45 compute-0 sudo[168894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alwnearplbsgciqhoxdpyzkpksblyego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947285.3770242-277-85394534800029/AnsiballZ_lineinfile.py'
Feb 24 15:34:45 compute-0 sudo[168894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:45 compute-0 python3.9[168897]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:45 compute-0 sudo[168894]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:46 compute-0 sudo[169047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpiywltdrhqerwcpusfqtmdswfnmbgfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947286.051746-277-142907798948759/AnsiballZ_lineinfile.py'
Feb 24 15:34:46 compute-0 sudo[169047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:46 compute-0 python3.9[169050]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:46 compute-0 sudo[169047]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:47 compute-0 sudo[169200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijglebmiwvrvozmoadcccbrfvjllsced ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947286.7133915-277-150623079822707/AnsiballZ_lineinfile.py'
Feb 24 15:34:47 compute-0 sudo[169200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:47 compute-0 python3.9[169203]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:47 compute-0 sudo[169200]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:47 compute-0 sudo[169353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyojhazcnbyhekaokioapvsvogppkkxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947287.365846-277-98743190976983/AnsiballZ_lineinfile.py'
Feb 24 15:34:47 compute-0 sudo[169353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:47 compute-0 python3.9[169356]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:47 compute-0 sudo[169353]: pam_unix(sudo:session): session closed for user root
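Four lineinfile calls then pin the defaults section: find_multipaths yes (only claim devices that really have multiple paths), recheck_wwid yes, skip_kpartx yes (no partition mappings on multipath devices), and user_friendly_names no, which keeps WWID-based device names that stay identical across hosts, the behavior OpenStack storage drivers expect. Each uses regexp plus firstmatch and insertafter '^defaults' to edit an existing setting in place or insert just under the section header (the shared -277- suffix in the ansible-tmp paths suggests the four calls are one loop task). One representative task:

    - name: Force deterministic WWID-based multipath names
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        regexp: '^\s+user_friendly_names'
        line: '        user_friendly_names no'
        insertafter: '^defaults'
        firstmatch: true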
Feb 24 15:34:48 compute-0 sudo[169506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycuenwpxlhcplbtatafnikjkkytngokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947288.0577261-306-150789041551432/AnsiballZ_stat.py'
Feb 24 15:34:48 compute-0 sudo[169506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:48 compute-0 python3.9[169509]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:34:48 compute-0 sudo[169506]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:49 compute-0 sudo[169661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvorxbbwatyiddknakrpcmwugysibiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947288.8004854-314-252744082949506/AnsiballZ_command.py'
Feb 24 15:34:49 compute-0 sudo[169661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:49 compute-0 python3.9[169664]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:34:49 compute-0 sudo[169661]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:49 compute-0 sudo[169815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeduaofimkxioiehnprwvrukagsrtsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947289.6094441-323-55990939231189/AnsiballZ_systemd_service.py'
Feb 24 15:34:49 compute-0 sudo[169815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:50 compute-0 python3.9[169818]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:34:50 compute-0 sshd-session[169820]: Connection closed by 111.228.14.125 port 48310
Feb 24 15:34:51 compute-0 systemd[1]: Listening on multipathd control socket.
Feb 24 15:34:51 compute-0 sudo[169815]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:51 compute-0 sudo[169973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yboxnflkvohwuontbzjrwkkqmmwnvlwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947291.4617012-331-211262632750952/AnsiballZ_systemd_service.py'
Feb 24 15:34:51 compute-0 sudo[169973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:52 compute-0 python3.9[169976]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:34:52 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 24 15:34:52 compute-0 udevadm[169981]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 24 15:34:52 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 24 15:34:52 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 24 15:34:52 compute-0 multipathd[169984]: --------start up--------
Feb 24 15:34:52 compute-0 multipathd[169984]: read /etc/multipath.conf
Feb 24 15:34:52 compute-0 multipathd[169984]: path checkers start up
Feb 24 15:34:52 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 24 15:34:52 compute-0 sudo[169973]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:52 compute-0 sudo[170142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocgogdgdialhiqdgrzbgalhpruqdewpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947292.6104372-343-281413601254542/AnsiballZ_file.py'
Feb 24 15:34:52 compute-0 sudo[170142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:53 compute-0 python3.9[170145]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 24 15:34:53 compute-0 sudo[170142]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:53 compute-0 sudo[170295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzuuhcswvbygivahwexbiuksxbthutvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947293.3266826-351-27402367520920/AnsiballZ_modprobe.py'
Feb 24 15:34:53 compute-0 sudo[170295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:53 compute-0 python3.9[170298]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 24 15:34:53 compute-0 kernel: Key type psk registered
Feb 24 15:34:53 compute-0 sudo[170295]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:54 compute-0 sudo[170458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvlcpwclosaxjemcftuekiroivtwpdea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947294.1129885-359-158231019102444/AnsiballZ_stat.py'
Feb 24 15:34:54 compute-0 sudo[170458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:54 compute-0 python3.9[170461]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:34:54 compute-0 sudo[170458]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:55 compute-0 sudo[170582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpimeulwvaceldrusidncymkblfergvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947294.1129885-359-158231019102444/AnsiballZ_copy.py'
Feb 24 15:34:55 compute-0 sudo[170582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:55 compute-0 python3.9[170585]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947294.1129885-359-158231019102444/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:55 compute-0 sudo[170582]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:34:55.687 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:34:55.688 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:34:55.688 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:34:56 compute-0 sudo[170735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicyskaumyihposvdxxqaspeoykqsjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947295.9759421-375-226328390779518/AnsiballZ_lineinfile.py'
Feb 24 15:34:56 compute-0 sudo[170735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:56 compute-0 python3.9[170738]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:34:56 compute-0 sudo[170735]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:57 compute-0 sudo[170888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthzlckhqoyupjuipptgcvjgbopsvklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947296.7114336-383-154106715848984/AnsiballZ_systemd.py'
Feb 24 15:34:57 compute-0 sudo[170888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:57 compute-0 python3.9[170891]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:34:57 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 24 15:34:57 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 24 15:34:57 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 24 15:34:57 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 24 15:34:57 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 24 15:34:57 compute-0 sudo[170888]: pam_unix(sudo:session): session closed for user root
Feb 24 15:34:57 compute-0 podman[170893]: 2026-02-24 15:34:57.393061852 +0000 UTC m=+0.073001440 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 24 15:34:58 compute-0 sudo[171064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnkpwrwngvgyhjzxhnvhzqssastngrch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947297.6723719-391-82511438231743/AnsiballZ_dnf.py'
Feb 24 15:34:58 compute-0 sudo[171064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:34:58 compute-0 python3.9[171067]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 24 15:35:00 compute-0 systemd[1]: Reloading.
Feb 24 15:35:00 compute-0 systemd-rc-local-generator[171090]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:35:01 compute-0 systemd-sysv-generator[171095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:35:01 compute-0 systemd[1]: Reloading.
Feb 24 15:35:01 compute-0 systemd-rc-local-generator[171133]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:35:01 compute-0 systemd-sysv-generator[171139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:35:01 compute-0 systemd-logind[813]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 24 15:35:01 compute-0 systemd-logind[813]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 24 15:35:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 24 15:35:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 24 15:35:01 compute-0 systemd[1]: Reloading.
Feb 24 15:35:01 compute-0 systemd-rc-local-generator[171241]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:35:01 compute-0 systemd-sysv-generator[171244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:35:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 24 15:35:02 compute-0 sudo[171064]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 24 15:35:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 24 15:35:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.301s CPU time.
Feb 24 15:35:02 compute-0 systemd[1]: run-ra41fb3af90b8416b9b0aa410ee95087c.service: Deactivated successfully.
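The same load-now/persist-at-boot pattern repeats for NVMe over Fabrics (the 'Key type psk registered' kernel line is a side effect of nvme-fabrics pulling in TLS PSK support), followed by a dnf install of nvme-cli. The rendered conf content is again censored; presumably it is the single module name required by modules-load.d(5):

    - name: Load the NVMe-oF transport module now
      community.general.modprobe:
        name: nvme-fabrics
        state: present
        persistent: disabled
      # boot persistence lands in /etc/modules-load.d/nvme-fabrics.conf,
      # presumably containing just "nvme-fabrics" (content is censored)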
Feb 24 15:35:02 compute-0 sudo[172560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oclydonqavrmlxpuwpvkbqqhmbjqvhye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947302.4923613-399-144250162601398/AnsiballZ_systemd_service.py'
Feb 24 15:35:02 compute-0 sudo[172560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:03 compute-0 python3.9[172563]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:35:03 compute-0 systemd[1]: Stopping Open-iSCSI...
Feb 24 15:35:03 compute-0 iscsid[165972]: iscsid shutting down.
Feb 24 15:35:03 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Feb 24 15:35:03 compute-0 systemd[1]: Stopped Open-iSCSI.
Feb 24 15:35:03 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 24 15:35:03 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 24 15:35:03 compute-0 systemd[1]: Started Open-iSCSI.
Feb 24 15:35:03 compute-0 sudo[172560]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:03 compute-0 sudo[172717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgwfchctkgesndujqhbfqilwbrkynxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947303.414241-407-109040951845591/AnsiballZ_systemd_service.py'
Feb 24 15:35:03 compute-0 sudo[172717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:03 compute-0 python3.9[172720]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:35:04 compute-0 multipathd[169984]: exit (signal)
Feb 24 15:35:04 compute-0 multipathd[169984]: --------shut down-------
Feb 24 15:35:04 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 24 15:35:04 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Feb 24 15:35:04 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 24 15:35:04 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 24 15:35:04 compute-0 multipathd[172726]: --------start up--------
Feb 24 15:35:04 compute-0 multipathd[172726]: read /etc/multipath.conf
Feb 24 15:35:04 compute-0 multipathd[172726]: path checkers start up
Feb 24 15:35:04 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 24 15:35:04 compute-0 sudo[172717]: pam_unix(sudo:session): session closed for user root
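With both configuration files in place, iscsid and multipathd are restarted so the daemons reread /etc/iscsi/iscsid.conf and /etc/multipath.conf; multipathd's own log confirms the reload (read /etc/multipath.conf, path checkers start up). In a handler-based play these would normally be notified handlers rather than inline tasks, which is an assumption here:

    - name: Restart iscsid to apply CHAP settings
      ansible.builtin.systemd_service:
        name: iscsid
        state: restarted

    - name: Restart multipathd to apply multipath.conf
      ansible.builtin.systemd_service:
        name: multipathd
        state: restarted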
Feb 24 15:35:04 compute-0 python3.9[172884]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:35:05 compute-0 sudo[173047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaxyzxqsvtvodlnhfeophinqcyfvoubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947305.489815-425-20297413255252/AnsiballZ_file.py'
Feb 24 15:35:05 compute-0 sudo[173047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:05 compute-0 podman[173012]: 2026-02-24 15:35:05.875139401 +0000 UTC m=+0.113015879 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 15:35:06 compute-0 python3.9[173057]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:06 compute-0 sudo[173047]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:06 compute-0 sudo[173216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vczitvfvxruhsbbbujkzytfdjmknecsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947306.480255-436-258812373352659/AnsiballZ_systemd_service.py'
Feb 24 15:35:06 compute-0 sudo[173216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:07 compute-0 python3.9[173219]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:35:07 compute-0 systemd[1]: Reloading.
Feb 24 15:35:07 compute-0 systemd-sysv-generator[173247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:35:07 compute-0 systemd-rc-local-generator[173242]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:35:07 compute-0 sudo[173216]: pam_unix(sudo:session): session closed for user root
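A bare daemon_reload call makes systemd re-read all unit files before the tripleo_nova_* units are queried and stopped below; the Reloading and generator lines that follow are its visible effect. As a task:

    - name: Reload systemd unit definitions
      ansible.builtin.systemd_service:
        daemon_reload: true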
Feb 24 15:35:08 compute-0 python3.9[173411]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:35:08 compute-0 network[173428]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:35:08 compute-0 network[173429]: 'network-scripts' will be removed from the distribution in the near future.
Feb 24 15:35:08 compute-0 network[173430]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 24 15:35:11 compute-0 sudo[173701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljwwncbceehvttosaalmtfiuoiefpwmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947311.1822934-455-268164246426710/AnsiballZ_systemd_service.py'
Feb 24 15:35:11 compute-0 sudo[173701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:11 compute-0 python3.9[173704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:11 compute-0 sudo[173701]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:12 compute-0 sudo[173855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqewiopwsgkfltyobxjyjzhbfdlfmmtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947311.8891122-455-145872403229506/AnsiballZ_systemd_service.py'
Feb 24 15:35:12 compute-0 sudo[173855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:12 compute-0 python3.9[173858]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:12 compute-0 sudo[173855]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:12 compute-0 sudo[174009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvjzvmgmonwioqmcycljlgizhcwdesf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947312.6902628-455-233086261335576/AnsiballZ_systemd_service.py'
Feb 24 15:35:12 compute-0 sudo[174009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:13 compute-0 python3.9[174012]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:13 compute-0 sudo[174009]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:13 compute-0 sudo[174163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsjgapzlrqhrknicxkojxagdbvdrvjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947313.4440286-455-271974610741952/AnsiballZ_systemd_service.py'
Feb 24 15:35:13 compute-0 sudo[174163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:14 compute-0 python3.9[174166]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:14 compute-0 sudo[174163]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:14 compute-0 sudo[174317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egyrpqovvkwvnlbjbquyehgueaivngiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947314.156432-455-168315560612695/AnsiballZ_systemd_service.py'
Feb 24 15:35:14 compute-0 sudo[174317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:14 compute-0 python3.9[174320]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:14 compute-0 sudo[174317]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:15 compute-0 sudo[174471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuoibwyqqnjhdiadkvpjuvqspiwsspyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947314.878937-455-5315330790652/AnsiballZ_systemd_service.py'
Feb 24 15:35:15 compute-0 sudo[174471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:15 compute-0 python3.9[174474]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:15 compute-0 sudo[174471]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:16 compute-0 sudo[174625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpfxtorvqzsrqsdqazvdxfqdxxpnvma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947315.7070105-455-235486320544381/AnsiballZ_systemd_service.py'
Feb 24 15:35:16 compute-0 sudo[174625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:16 compute-0 python3.9[174628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:16 compute-0 sudo[174625]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:16 compute-0 sudo[174779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixgcpizsqbktxryzkoncmmesiwjjoaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947316.4968374-455-83420223202215/AnsiballZ_systemd_service.py'
Feb 24 15:35:16 compute-0 sudo[174779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:17 compute-0 python3.9[174782]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:35:17 compute-0 sudo[174779]: pam_unix(sudo:session): session closed for user root
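The eight near-identical systemd_service calls above stop and disable the legacy tripleo_nova_* units as part of the TripleO-to-EDPM adoption, and the file tasks that follow delete the orphaned unit files from /usr/lib/systemd/system. The shared -455- suffix in the ansible-tmp paths suggests a single loop task rather than eight copies; a sketch under that assumption:

    - name: Stop and disable the legacy TripleO nova services
      ansible.builtin.systemd_service:
        name: "tripleo_nova_{{ item }}.service"
        state: stopped
        enabled: false
      loop: [compute, migration_target, api_cron, api, conductor,
             metadata, scheduler, vnc_proxy]

    - name: Remove the orphaned unit files
      ansible.builtin.file:
        path: "/usr/lib/systemd/system/tripleo_nova_{{ item }}.service"
        state: absent
      loop: [compute, migration_target, api_cron, api, conductor,
             metadata, scheduler, vnc_proxy]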
Feb 24 15:35:17 compute-0 sudo[174933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufjrvsmnynashereviirqmvohgmrzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947317.5515773-514-121967866004301/AnsiballZ_file.py'
Feb 24 15:35:17 compute-0 sudo[174933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:18 compute-0 python3.9[174936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:18 compute-0 sudo[174933]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:18 compute-0 sudo[175086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cehndapajesjsnlnueysjarsnxtmcxnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947318.2561924-514-84832443102352/AnsiballZ_file.py'
Feb 24 15:35:18 compute-0 sudo[175086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:18 compute-0 python3.9[175089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:18 compute-0 sudo[175086]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:19 compute-0 sudo[175239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskfkzozkolvvggusyrnbzyksdhmhfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947318.885018-514-89249436826456/AnsiballZ_file.py'
Feb 24 15:35:19 compute-0 sudo[175239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:19 compute-0 python3.9[175242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:19 compute-0 sudo[175239]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:19 compute-0 sudo[175392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcbzjdlgfmizoxbptubvbhvrghvpmaww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947319.5502057-514-185319158452627/AnsiballZ_file.py'
Feb 24 15:35:19 compute-0 sudo[175392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:20 compute-0 python3.9[175395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:20 compute-0 sudo[175392]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:20 compute-0 sudo[175545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxvnvncophrxxuznkgnptuoadzktjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947320.1954436-514-119860129360101/AnsiballZ_file.py'
Feb 24 15:35:20 compute-0 sudo[175545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:20 compute-0 python3.9[175548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:20 compute-0 sudo[175545]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:21 compute-0 sudo[175698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oppnlzdjakgiwlakoopyuxmthhsvtaro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947320.9056675-514-86575013175812/AnsiballZ_file.py'
Feb 24 15:35:21 compute-0 sudo[175698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:21 compute-0 python3.9[175701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:21 compute-0 sudo[175698]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:21 compute-0 sudo[175852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slwjzoikubbpoisbfpspxuxatgeyocrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947321.5310867-514-146918296679368/AnsiballZ_file.py'
Feb 24 15:35:21 compute-0 sudo[175852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:22 compute-0 python3.9[175855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:22 compute-0 sudo[175852]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:22 compute-0 sudo[176005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipptbljwwdrepvqsajeriuatlkflirt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947322.185764-514-224195703076232/AnsiballZ_file.py'
Feb 24 15:35:22 compute-0 sudo[176005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:22 compute-0 python3.9[176008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:22 compute-0 sudo[176005]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:23 compute-0 sudo[176158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rydeycvchaoybtsicpfarxfbvzukwefu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947322.8873138-571-233151431527275/AnsiballZ_file.py'
Feb 24 15:35:23 compute-0 sudo[176158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:23 compute-0 python3.9[176161]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:23 compute-0 sudo[176158]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:23 compute-0 sudo[176312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjhadydeypbhszubqdbckpofkqrhenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947323.5457752-571-69272378890198/AnsiballZ_file.py'
Feb 24 15:35:23 compute-0 sudo[176312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:24 compute-0 python3.9[176315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:24 compute-0 sudo[176312]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:24 compute-0 sshd-session[175731]: Invalid user Administrator from 80.94.95.116 port 36610
Feb 24 15:35:24 compute-0 sshd-session[175731]: Connection closed by invalid user Administrator 80.94.95.116 port 36610 [preauth]
Feb 24 15:35:24 compute-0 sudo[176465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzzyicbmakrrlcnhxeiwjflceowtsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947324.2353506-571-76272718364270/AnsiballZ_file.py'
Feb 24 15:35:24 compute-0 sudo[176465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:24 compute-0 python3.9[176468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:24 compute-0 sudo[176465]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:25 compute-0 sudo[176618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaaphtkuzehqoxvmuibemrykcjdaejod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947324.911638-571-103908075527886/AnsiballZ_file.py'
Feb 24 15:35:25 compute-0 sudo[176618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:25 compute-0 python3.9[176621]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:25 compute-0 sudo[176618]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:25 compute-0 sudo[176771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umirhrqjgtohoclsayvhdefuhnjhfged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947325.592446-571-178838721044983/AnsiballZ_file.py'
Feb 24 15:35:25 compute-0 sudo[176771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:26 compute-0 python3.9[176774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:26 compute-0 sudo[176771]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:26 compute-0 sudo[176924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzekufgpfhkgeeraucmfdrdsveugcpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947326.2618957-571-276784024398865/AnsiballZ_file.py'
Feb 24 15:35:26 compute-0 sudo[176924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:26 compute-0 python3.9[176927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:26 compute-0 sudo[176924]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:27 compute-0 sudo[177077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmwrvtbnvcewlzfpotwunctzezyrfts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947326.9126887-571-171748540944566/AnsiballZ_file.py'
Feb 24 15:35:27 compute-0 sudo[177077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:27 compute-0 python3.9[177080]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:27 compute-0 sudo[177077]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:27 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 24 15:35:27 compute-0 podman[177081]: 2026-02-24 15:35:27.57146719 +0000 UTC m=+0.063559036 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 24 15:35:27 compute-0 sudo[177248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvhilwjhiwgdsxcsfenbmwlutqmegtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947327.666122-571-57885029660029/AnsiballZ_file.py'
Feb 24 15:35:27 compute-0 sudo[177248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:28 compute-0 python3.9[177251]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:35:28 compute-0 sudo[177248]: pam_unix(sudo:session): session closed for user root
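With the services stopped, the same set of unit names is deleted twice with ansible.builtin.file state=absent: first from the vendor directory /usr/lib/systemd/system, then from the admin directory /etc/systemd/system, so no copy of the unit survives the next daemon reload. A sketch of the repeated task (the loop shape is my reconstruction; the paths and state=absent are taken from the log):

    - name: Remove legacy tripleo_nova_* unit files from both unit directories
      become: true
      ansible.builtin.file:
        path: "{{ item.0 }}/{{ item.1 }}"
        state: absent
      loop: "{{ ['/usr/lib/systemd/system', '/etc/systemd/system']
                | product(['tripleo_nova_compute.service',
                           'tripleo_nova_migration_target.service',
                           'tripleo_nova_api_cron.service',
                           'tripleo_nova_api.service',
                           'tripleo_nova_conductor.service',
                           'tripleo_nova_metadata.service',
                           'tripleo_nova_scheduler.service',
                           'tripleo_nova_vnc_proxy.service']) }}"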
Feb 24 15:35:28 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 15:35:28 compute-0 sudo[177402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezalftaepaprgvicwvvwcejoqbgidblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947328.4685743-629-274844470108990/AnsiballZ_command.py'
Feb 24 15:35:28 compute-0 sudo[177402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:28 compute-0 python3.9[177405]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:28 compute-0 sudo[177402]: pam_unix(sudo:session): session closed for user root
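The certmonger cleanup runs as a single shell task rather than a module call because of its conditional logic: the service is disabled only if it is active, and it is masked only when no real unit file exists at /etc/systemd/system/certmonger.service (masking creates a /dev/null symlink at that path, and the test -f guard avoids clobbering a genuine override). The logged _raw_params reassemble to:

    - name: Disable and mask certmonger when present
      become: true
      ansible.builtin.shell: |
        if systemctl is-active certmonger.service; then
          systemctl disable --now certmonger.service
          test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
        fi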
Feb 24 15:35:29 compute-0 python3.9[177557]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:35:29 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 24 15:35:30 compute-0 sudo[177708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epoljyyjwdjghsvnqcueqnrvlcrjoopv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947330.2017066-647-207928534458431/AnsiballZ_systemd_service.py'
Feb 24 15:35:30 compute-0 sudo[177708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:30 compute-0 python3.9[177711]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:35:30 compute-0 systemd[1]: Reloading.
Feb 24 15:35:30 compute-0 systemd-rc-local-generator[177736]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:35:30 compute-0 systemd-sysv-generator[177743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:35:31 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 24 15:35:31 compute-0 sudo[177708]: pam_unix(sudo:session): session closed for user root
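Deleting unit files on disk does not remove them from systemd's in-memory state, so the play issues a bare daemon reload next; the "Reloading." message and the rc-local/sysv generator warnings that follow are systemd re-running all of its unit generators. As a task this is simply:

    - name: Reload systemd after removing unit files
      become: true
      ansible.builtin.systemd_service:
        daemon_reload: true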
Feb 24 15:35:31 compute-0 sudo[177904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvifrxqiocchbyhhwzyvqrdbzjnlddu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947331.3616936-655-271338795061181/AnsiballZ_command.py'
Feb 24 15:35:31 compute-0 sudo[177904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:31 compute-0 python3.9[177907]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:31 compute-0 sudo[177904]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:32 compute-0 sudo[178058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlidvjzqdmrdodoyxyhwkqnfnwtvqknv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947331.9883697-655-94698601368195/AnsiballZ_command.py'
Feb 24 15:35:32 compute-0 sudo[178058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:32 compute-0 python3.9[178061]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:32 compute-0 sudo[178058]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:32 compute-0 sudo[178212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqpvxerhypbsqhhrtjfhbciwfpdlcygm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947332.6894171-655-250633545978684/AnsiballZ_command.py'
Feb 24 15:35:32 compute-0 sudo[178212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:33 compute-0 python3.9[178215]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:33 compute-0 sudo[178212]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:33 compute-0 sudo[178366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvtlmqkeslhgrgvdcpplkzevoranzvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947333.3412917-655-243770528879716/AnsiballZ_command.py'
Feb 24 15:35:33 compute-0 sudo[178366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:33 compute-0 python3.9[178369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:33 compute-0 sudo[178366]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:34 compute-0 sudo[178520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwjzqrzybabcfnqgfdxfwqryximvmtdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947334.0192115-655-136473418928538/AnsiballZ_command.py'
Feb 24 15:35:34 compute-0 sudo[178520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:34 compute-0 python3.9[178523]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:34 compute-0 sudo[178520]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:35 compute-0 sudo[178674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvuyqwtvthswpabbnqdalkinzkirpum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947334.7143486-655-86299216975535/AnsiballZ_command.py'
Feb 24 15:35:35 compute-0 sudo[178674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:35 compute-0 python3.9[178677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:35 compute-0 sudo[178674]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:35 compute-0 sudo[178828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslfflpqcrvjcrhkmgqnxwejsxxtsvfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947335.3141682-655-40934293818129/AnsiballZ_command.py'
Feb 24 15:35:35 compute-0 sudo[178828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:35 compute-0 python3.9[178831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:35 compute-0 sudo[178828]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:36 compute-0 podman[178909]: 2026-02-24 15:35:36.228927748 +0000 UTC m=+0.177048472 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 24 15:35:36 compute-0 sudo[179008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztowjtrpademegcbxoeqvsvdrwmxztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947335.9370494-655-252524837995464/AnsiballZ_command.py'
Feb 24 15:35:36 compute-0 sudo[179008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:36 compute-0 python3.9[179011]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:35:36 compute-0 sudo[179008]: pam_unix(sudo:session): session closed for user root
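Even after a reload, a unit that last exited in a failed state lingers in `systemctl --failed`, so each removed service gets an explicit reset-failed. A sketch of the loop behind the logged command invocations (loop abbreviated; the command line is verbatim from the log):

    - name: Clear residual failed state for the removed units
      become: true
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_compute.service
        - tripleo_nova_migration_target.service
        # ... one entry per tripleo_nova_* unit, as in the log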
Feb 24 15:35:37 compute-0 sudo[179163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpocijiblaptnhdyhghaicctcpajnfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947337.548864-734-278788519055311/AnsiballZ_file.py'
Feb 24 15:35:37 compute-0 sudo[179163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:38 compute-0 python3.9[179166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:38 compute-0 sudo[179163]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:38 compute-0 sudo[179316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqrwbjtivoovukuhoxepglpswbuiwgpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947338.2181766-734-89410648919251/AnsiballZ_file.py'
Feb 24 15:35:38 compute-0 sudo[179316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:38 compute-0 python3.9[179319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:38 compute-0 sudo[179316]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:39 compute-0 sudo[179469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwhlztdlvlsaptmkoalzbthvpguabia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947338.9715755-749-66698440081723/AnsiballZ_file.py'
Feb 24 15:35:39 compute-0 sudo[179469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:39 compute-0 python3.9[179472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:39 compute-0 sudo[179469]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:39 compute-0 sudo[179622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkkwspihedkmtmifspcldirqizohzvca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947339.6264393-749-27615927768139/AnsiballZ_file.py'
Feb 24 15:35:39 compute-0 sudo[179622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:40 compute-0 python3.9[179625]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:40 compute-0 sudo[179622]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:40 compute-0 sudo[179775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlexxtlqeyxydyykjufvihipbnkjwvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947340.329173-749-266000463153504/AnsiballZ_file.py'
Feb 24 15:35:40 compute-0 sudo[179775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:40 compute-0 python3.9[179778]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:40 compute-0 sudo[179775]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:41 compute-0 sudo[179928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhhpumsfbifuzggribivdsvcpcvrgcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947341.0180786-749-100776763479187/AnsiballZ_file.py'
Feb 24 15:35:41 compute-0 sudo[179928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:41 compute-0 python3.9[179931]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:41 compute-0 sudo[179928]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:41 compute-0 sudo[180081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhmctzzyksfbdpfiwwncfmskbzcbzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947341.6760612-749-243833059070815/AnsiballZ_file.py'
Feb 24 15:35:41 compute-0 sudo[180081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:42 compute-0 python3.9[180084]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:42 compute-0 sudo[180081]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:42 compute-0 sudo[180234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzcjagprltadsusiqhmvlnhxlmcswdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947342.2027788-749-179735635537267/AnsiballZ_file.py'
Feb 24 15:35:42 compute-0 sudo[180234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:42 compute-0 python3.9[180237]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:42 compute-0 sudo[180234]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:43 compute-0 sudo[180387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwpuwuahgugkphbwwjrvtujkahczomi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947342.838754-749-44579747488211/AnsiballZ_file.py'
Feb 24 15:35:43 compute-0 sudo[180387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:43 compute-0 python3.9[180390]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:43 compute-0 sudo[180387]: pam_unix(sudo:session): session closed for user root
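The directory tasks above prepare bind-mount sources for the nova containers: each path is created with setype=container_file_t so SELinux permits container processes to read and write it. A representative task reconstructed from the logged arguments (ownership by the zuul CI user and the mode come from the log; the path list is abbreviated, and some entries such as /etc/ceph use root:root with mode 0750 instead):

    - name: Create container-accessible state directories
      become: true
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t
      loop:
        - /var/lib/openstack/nova
        - /var/lib/nova
        - /var/lib/nova/instances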
Feb 24 15:35:49 compute-0 sudo[180540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocfcfqbxdgvpfsezmnwnwetlinitjmnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947349.016875-958-129783713274868/AnsiballZ_getent.py'
Feb 24 15:35:49 compute-0 sudo[180540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:49 compute-0 python3.9[180543]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 24 15:35:49 compute-0 sudo[180540]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:50 compute-0 sudo[180694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phlwwmrggqtomoweorhinzfutgwvvnfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947349.872354-966-23056646919125/AnsiballZ_group.py'
Feb 24 15:35:50 compute-0 sudo[180694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:50 compute-0 python3.9[180697]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:35:50 compute-0 groupadd[180698]: group added to /etc/group: name=nova, GID=42436
Feb 24 15:35:50 compute-0 groupadd[180698]: group added to /etc/gshadow: name=nova
Feb 24 15:35:50 compute-0 groupadd[180698]: new group: name=nova, GID=42436
Feb 24 15:35:50 compute-0 sudo[180694]: pam_unix(sudo:session): session closed for user root
Feb 24 15:35:51 compute-0 sudo[180853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfudwyvaagqfeamhmagncbmrllzqyolv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947350.8049912-974-236435632948230/AnsiballZ_user.py'
Feb 24 15:35:51 compute-0 sudo[180853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:35:51 compute-0 python3.9[180856]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 24 15:35:51 compute-0 useradd[180858]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/1
Feb 24 15:35:51 compute-0 useradd[180858]: add 'nova' to group 'libvirt'
Feb 24 15:35:51 compute-0 useradd[180858]: add 'nova' to shadow group 'libvirt'
Feb 24 15:35:51 compute-0 sudo[180853]: pam_unix(sudo:session): session closed for user root
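The getent lookup checks whether a nova account already exists; since it does not, a group and user are created with the fixed UID/GID 42436 used by the kolla-built container images, plus membership in libvirt so the nova user can manage guests. Reconstructed tasks (task names are mine; all parameters appear in the log):

    - name: Ensure the nova group exists
      become: true
      ansible.builtin.group:
        name: nova
        gid: 42436

    - name: Ensure the nova user exists with libvirt access
      become: true
      ansible.builtin.user:
        name: nova
        uid: 42436
        group: nova
        groups:
          - libvirt
        shell: /bin/sh
        comment: nova user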
Feb 24 15:35:52 compute-0 sshd-session[180889]: Accepted publickey for zuul from 192.168.122.30 port 54402 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:35:52 compute-0 systemd-logind[813]: New session 24 of user zuul.
Feb 24 15:35:52 compute-0 systemd[1]: Started Session 24 of User zuul.
Feb 24 15:35:52 compute-0 sshd-session[180889]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:35:52 compute-0 sshd-session[180892]: Received disconnect from 192.168.122.30 port 54402:11: disconnected by user
Feb 24 15:35:52 compute-0 sshd-session[180892]: Disconnected from user zuul 192.168.122.30 port 54402
Feb 24 15:35:52 compute-0 sshd-session[180889]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:35:52 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Feb 24 15:35:52 compute-0 systemd-logind[813]: Session 24 logged out. Waiting for processes to exit.
Feb 24 15:35:52 compute-0 systemd-logind[813]: Removed session 24.
Feb 24 15:35:53 compute-0 python3.9[181042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:35:53 compute-0 python3.9[181118]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:54 compute-0 python3.9[181268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:35:55 compute-0 python3.9[181389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947354.0695891-999-243950820091544/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:35:55.689 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:35:55.689 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:35:55.689 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:35:55 compute-0 python3.9[181539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:35:56 compute-0 python3.9[181660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947355.2752573-999-85133559046714/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:57 compute-0 python3.9[181810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:35:57 compute-0 podman[181905]: 2026-02-24 15:35:57.742642326 +0000 UTC m=+0.073066674 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:35:57 compute-0 python3.9[181942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947356.5818257-999-273796125371926/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:35:58 compute-0 python3.9[182098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:35:59 compute-0 python3.9[182219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947358.0899284-1053-231873699190524/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
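Each ansible.legacy.stat / ansible.legacy.copy pair above is a single copy or template task as seen from the remote side: the action plugin first stats the destination to compare checksums and only transfers the file on mismatch, which is what makes these tasks idempotent. The _original_basename of 02-nova-host-specific.conf.j2 shows that file is rendered from a Jinja2 template, roughly:

    - name: Render host-specific nova configuration
      ansible.builtin.template:
        src: 02-nova-host-specific.conf.j2
        dest: /var/lib/openstack/nova/02-nova-host-specific.conf
        mode: "0644"
        setype: container_file_t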
Feb 24 15:35:59 compute-0 sudo[182369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olahqqyzkyqghsyzqljeaouvrqcxegtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947359.5895414-1068-148032977536752/AnsiballZ_file.py'
Feb 24 15:35:59 compute-0 sudo[182369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:00 compute-0 python3.9[182372]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:00 compute-0 sudo[182369]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:00 compute-0 sudo[182522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujcnlluxnzksenggxrdicpnfyxageml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947360.2917244-1076-155123197330611/AnsiballZ_copy.py'
Feb 24 15:36:00 compute-0 sudo[182522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:00 compute-0 python3.9[182525]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:00 compute-0 sudo[182522]: pam_unix(sudo:session): session closed for user root
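The nova user's ~/.ssh is created with mode 0700 and the deployment's migration public key is installed as authorized_keys from a file already staged on the host, hence remote_src=true. Sketched from the logged arguments:

    - name: Install the migration SSH key for nova
      become: true
      ansible.builtin.copy:
        remote_src: true
        src: /var/lib/openstack/nova/ssh-publickey
        dest: /home/nova/.ssh/authorized_keys
        owner: nova
        group: nova
        mode: "0600"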
Feb 24 15:36:01 compute-0 sudo[182675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabiiegjtshzqerxjdowpxpmumdrtihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947360.890153-1084-137494884614518/AnsiballZ_stat.py'
Feb 24 15:36:01 compute-0 sudo[182675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:01 compute-0 python3.9[182678]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:01 compute-0 sudo[182675]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:01 compute-0 sudo[182828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzdyzhzdabulrphahicfyowlsqibmxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947361.5797513-1092-95002684023647/AnsiballZ_stat.py'
Feb 24 15:36:01 compute-0 sudo[182828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:02 compute-0 python3.9[182831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:36:02 compute-0 sudo[182828]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:02 compute-0 sudo[182952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khmbcvvpgmgzlndxlsthtlfhxpopvggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947361.5797513-1092-95002684023647/AnsiballZ_copy.py'
Feb 24 15:36:02 compute-0 sudo[182952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:02 compute-0 python3.9[182955]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1771947361.5797513-1092-95002684023647/.source _original_basename=.b4dxv0wi follow=False checksum=283440766a4e51ee5b6bddddf19f8e8213a8aed2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 24 15:36:02 compute-0 sudo[182952]: pam_unix(sudo:session): session closed for user root
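The compute_id file stores the node's stable compute UUID. It is written once with mode 0400 and attributes=+i, which sets the immutable file attribute (chattr +i) so nothing, including the later ownership-fixing container, can modify or delete it; the nova_compute_init container's NOVA_STATEDIR_OWNERSHIP_SKIP below points at this same path for that reason. Reconstructed task (the file content is not visible in the log and is shown as an assumed variable):

    - name: Persist the compute node UUID immutably
      become: true
      ansible.builtin.copy:
        content: "{{ compute_uuid }}"  # assumed variable; value not logged
        dest: /var/lib/nova/compute_id
        owner: nova
        group: nova
        mode: "0400"
        attributes: +i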
Feb 24 15:36:03 compute-0 python3.9[183107]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:04 compute-0 sudo[183259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgsswrioyvubcdcftwducabgujdnmtcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947363.8275707-1120-127350634977582/AnsiballZ_file.py'
Feb 24 15:36:04 compute-0 sudo[183259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:04 compute-0 python3.9[183262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:04 compute-0 sudo[183259]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:04 compute-0 sudo[183412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocapabfikocxeoeipzjujmtoeqmmvtds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947364.529903-1128-143209305318799/AnsiballZ_file.py'
Feb 24 15:36:04 compute-0 sudo[183412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:04 compute-0 python3.9[183415]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:36:04 compute-0 sudo[183412]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:05 compute-0 python3.9[183565]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:06 compute-0 podman[183810]: 2026-02-24 15:36:06.918002597 +0000 UTC m=+0.108478925 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:36:07 compute-0 sudo[184012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cytczlajdkjavjfccuyfsazvfgutxpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947367.4271948-1162-19670488100293/AnsiballZ_container_config_data.py'
Feb 24 15:36:07 compute-0 sudo[184012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:08 compute-0 python3.9[184015]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 24 15:36:08 compute-0 sudo[184012]: pam_unix(sudo:session): session closed for user root
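The ansible-container_config_data call above gathers per-container startup configs for nova_compute_init. The real module lives in the edpm-ansible collection and is not shown in this log; a minimal Python sketch of the gather step, assuming it simply loads every file matching config_pattern under config_path and layers config_overrides on top (config_overrides={} here, so a no-op):

    import glob
    import json
    import os

    def gather_container_configs(config_path, config_pattern="*.json",
                                 config_overrides=None):
        """Load each matching JSON file into a dict keyed by container name
        (file name without extension). Hypothetical re-implementation for
        illustration only."""
        configs = {}
        for path in glob.glob(os.path.join(config_path, config_pattern)):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as f:
                configs[name] = json.load(f)
            # Assumed override semantics: per-container dict merge.
            configs[name].update((config_overrides or {}).get(name, {}))
        return configs

    # Matches the invocation logged above:
    gather_container_configs(
        "/var/lib/edpm-config/container-startup-config/nova_compute_init")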
Feb 24 15:36:09 compute-0 sudo[184165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvnjxvnkctlkjejglaskfohrzkfqqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947368.5901315-1173-251552603352509/AnsiballZ_container_config_hash.py'
Feb 24 15:36:09 compute-0 sudo[184165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:09 compute-0 python3.9[184168]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:36:09 compute-0 sudo[184165]: pam_unix(sudo:session): session closed for user root
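ansible-container_config_hash walks config_vol_prefix (/var/lib/openstack) and produces the EDPM_CONFIG_HASH values that appear in the container labels in this log: one sha256 hex digest per config volume, joined with '-'. Notably, the middle digest in the nova_compute hash further down (e3b0c442…) is the sha256 of empty input, consistent with an empty config volume. The real module is part of edpm-ansible; this is only a plausible sketch of the idea:

    import hashlib
    import os

    def hash_config_volume(volume_dir):
        """One sha256 digest covering every file in a config volume, walked
        in a stable order. Hypothetical sketch, not the real module."""
        digest = hashlib.sha256()
        for root, _dirs, files in os.walk(volume_dir):
            for name in sorted(files):
                with open(os.path.join(root, name), "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()

    def edpm_config_hash(volume_dirs):
        # e.g. EDPM_CONFIG_HASH=<hash1>-<hash2>-<hash3> as in the labels below
        return "-".join(hash_config_volume(d) for d in volume_dirs)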
Feb 24 15:36:10 compute-0 sudo[184318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taiqdzrmzbbpmhlajxateopjohfsaczx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947369.7274778-1183-126469223607194/AnsiballZ_edpm_container_manage.py'
Feb 24 15:36:10 compute-0 sudo[184318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:10 compute-0 python3[184321]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:36:10 compute-0 podman[184356]: 2026-02-24 15:36:10.663971366 +0000 UTC m=+0.064831927 container create 64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, container_name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, config_id=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:36:10 compute-0 podman[184356]: 2026-02-24 15:36:10.632220786 +0000 UTC m=+0.033081387 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 24 15:36:10 compute-0 python3[184321]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 24 15:36:10 compute-0 sudo[184318]: pam_unix(sudo:session): session closed for user root
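The nova_compute_init container created above runs /sbin/nova_statedir_ownership.py (bind-mounted from /var/lib/openstack/nova) with NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id. Judging from its name and environment, it normalizes ownership of the nova state directory before the main service container starts. A hedged sketch of that behavior; the real script is more careful (the /var/lib/_nova_secontext mount suggests it also deals with SELinux relabeling, omitted here):

    import os
    import pwd

    SKIP = set(os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":"))

    def ensure_nova_ownership(statedir="/var/lib/nova"):
        """Recursively chown the nova state directory to the 'nova' user,
        skipping paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP. Illustrative
        only; error handling and symlink policy are assumptions."""
        nova = pwd.getpwnam("nova")
        for root, dirs, files in os.walk(statedir):
            for name in dirs + files:
                path = os.path.join(root, name)
                if path in SKIP:
                    continue
                os.lchown(path, nova.pw_uid, nova.pw_gid)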
Feb 24 15:36:11 compute-0 sudo[184544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhoyqdlyjdrcllqzpvnhmbgnepasslhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947371.0050378-1191-150840058487693/AnsiballZ_stat.py'
Feb 24 15:36:11 compute-0 sudo[184544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:11 compute-0 python3.9[184547]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:11 compute-0 sudo[184544]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:12 compute-0 python3.9[184699]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:36:13 compute-0 sudo[184849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfjgohwvzsndkavaavlpjkxgcysteab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947373.03344-1218-9004527626889/AnsiballZ_stat.py'
Feb 24 15:36:13 compute-0 sudo[184849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:13 compute-0 python3.9[184852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:36:13 compute-0 sudo[184849]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:14 compute-0 sudo[184975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yslcyieisjazjeczfppklfvpgbfmtglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947373.03344-1218-9004527626889/AnsiballZ_copy.py'
Feb 24 15:36:14 compute-0 sudo[184975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:14 compute-0 python3.9[184978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947373.03344-1218-9004527626889/.source.yaml _original_basename=.y6mx6q4g follow=False checksum=9f004bebcf6c7d2ae504b7d08cd5581727838ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:14 compute-0 sudo[184975]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:15 compute-0 sudo[185128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbypznqizqkasgrguaqyvmvosxqyprw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947374.7719493-1235-222083750330373/AnsiballZ_file.py'
Feb 24 15:36:15 compute-0 sudo[185128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:15 compute-0 python3.9[185131]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:15 compute-0 sudo[185128]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:15 compute-0 sudo[185281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acpvxazclrbpmbcjcsnirmzoxbugevdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947375.474485-1243-230791341292099/AnsiballZ_file.py'
Feb 24 15:36:15 compute-0 sudo[185281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:15 compute-0 python3.9[185284]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:36:15 compute-0 sudo[185281]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:16 compute-0 sudo[185434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanmhffhuaouorlndmbsozyvodqrdysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947376.190451-1251-206243092907548/AnsiballZ_stat.py'
Feb 24 15:36:16 compute-0 sudo[185434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:16 compute-0 python3.9[185437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:36:16 compute-0 sudo[185434]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:17 compute-0 sudo[185558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyfdfzwnjkugbcazepvrfsvlzuvqlxeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947376.190451-1251-206243092907548/AnsiballZ_copy.py'
Feb 24 15:36:17 compute-0 sudo[185558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:17 compute-0 python3.9[185561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947376.190451-1251-206243092907548/.source.json _original_basename=.isd0nltp follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:17 compute-0 sudo[185558]: pam_unix(sudo:session): session closed for user root
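The nova_compute.json written above (mode 0600) is a Kolla container config file. Its contents are not logged, but the kolla_set_configs output further down (copying nova-blank.conf, 01-nova.conf, the ssh key, the run-on-host wrapper, then writing nova-compute to /run_command) implies a shape like the following Python literal. Owners and permissions beyond what the log shows are assumptions:

    nova_compute_json = {
        "command": "nova-compute",   # becomes /run_command, see kolla_start below
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",     # assumed
                "perm": "0600",      # assumed
            },
            {
                "source": "/var/lib/kolla/config_files/src/ssh-privatekey",
                "dest": "/var/lib/nova/.ssh/ssh-privatekey",
                "owner": "nova",     # assumed
                "perm": "0600",      # assumed
            },
            # ... one entry per file copied in the kolla_set_configs log below
        ],
    }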
Feb 24 15:36:18 compute-0 python3.9[185711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:20 compute-0 sudo[186132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rftfadcdukgfwpbsnkumgglvgktudsja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947379.8807545-1291-101947147693969/AnsiballZ_container_config_data.py'
Feb 24 15:36:20 compute-0 sudo[186132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:20 compute-0 python3.9[186135]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 24 15:36:20 compute-0 sudo[186132]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:21 compute-0 sudo[186285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkabdcnvcqcxitbsnkuddoijsohtxeej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947380.8761218-1302-221510201680387/AnsiballZ_container_config_hash.py'
Feb 24 15:36:21 compute-0 sudo[186285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:21 compute-0 python3.9[186288]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:36:21 compute-0 sudo[186285]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:22 compute-0 sudo[186438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihiigjxkpjocftilmlgkdkxaxsohegye ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947381.7634451-1312-1652842214030/AnsiballZ_edpm_container_manage.py'
Feb 24 15:36:22 compute-0 sudo[186438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:22 compute-0 python3[186441]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:36:22 compute-0 podman[186479]: 2026-02-24 15:36:22.673841831 +0000 UTC m=+0.054647848 container create 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, config_id=nova_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:36:22 compute-0 podman[186479]: 2026-02-24 15:36:22.651206841 +0000 UTC m=+0.032012848 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 24 15:36:22 compute-0 python3[186441]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 24 15:36:22 compute-0 sudo[186438]: pam_unix(sudo:session): session closed for user root
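Both PODMAN-CONTAINER-DEBUG lines above are mechanical translations of the config_data dict into a podman create command: net becomes --network, pid becomes --pid, environment becomes repeated --env, volumes repeated --volume, and so on. A condensed sketch of that mapping; the real edpm_container_manage code also emits the labels, healthcheck, and restart handling visible in the log:

    def podman_create_argv(name, cfg):
        """Translate an edpm config_data dict into a podman create argv.
        Illustrative subset of the flags visible in the log above."""
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--log-driver", "journald", "--log-level", "info"]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "pid" in cfg:
            argv += ["--pid", cfg["pid"]]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        argv.append(f"--privileged={cfg.get('privileged', False)}")
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        # Naive split; the real module preserves quoted shell commands like
        # the nova_compute_init 'bash -c ...' entry intact.
        argv += cfg.get("command", "").split()
        return argv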
Feb 24 15:36:23 compute-0 sudo[186667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caaerrncuksxpgrwymyzajapikxlxczi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947383.0663128-1320-68505223823853/AnsiballZ_stat.py'
Feb 24 15:36:23 compute-0 sudo[186667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:23 compute-0 python3.9[186670]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:23 compute-0 sudo[186667]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:24 compute-0 sudo[186822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njusfkvstnccxbnyppaqjejexgpeqwxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947383.816684-1329-166400983593806/AnsiballZ_file.py'
Feb 24 15:36:24 compute-0 sudo[186822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:24 compute-0 python3.9[186825]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:24 compute-0 sudo[186822]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:24 compute-0 sudo[186899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefnqcgujuhvvlsujsalpzstpimfisyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947383.816684-1329-166400983593806/AnsiballZ_stat.py'
Feb 24 15:36:24 compute-0 sudo[186899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:24 compute-0 python3.9[186902]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:24 compute-0 sudo[186899]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:25 compute-0 sudo[187051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzvtmnxlkksbzgvfavqylgqjgycgbujd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947384.8640234-1329-57239251859217/AnsiballZ_copy.py'
Feb 24 15:36:25 compute-0 sudo[187051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:25 compute-0 python3.9[187054]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947384.8640234-1329-57239251859217/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:25 compute-0 sudo[187051]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:25 compute-0 sudo[187128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpwajafccawlwfmndxxgtwhicndzaplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947384.8640234-1329-57239251859217/AnsiballZ_systemd.py'
Feb 24 15:36:25 compute-0 sudo[187128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:26 compute-0 python3.9[187131]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:36:26 compute-0 systemd[1]: Reloading.
Feb 24 15:36:26 compute-0 systemd-sysv-generator[187157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:36:26 compute-0 systemd-rc-local-generator[187154]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:36:26 compute-0 sudo[187128]: pam_unix(sudo:session): session closed for user root
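The ansible-copy and daemon_reload above install /etc/systemd/system/edpm_nova_compute.service, which systemd then reports below as "Starting nova_compute container...". The unit body is not captured in this log; based on the conmon pidfile and restart=always policy visible earlier, it plausibly looks something like the string in this sketch. Apart from the unit path and container name, every line of the unit text is an assumption; the real unit is templated by edpm-ansible:

    from pathlib import Path
    from textwrap import dedent

    # Hypothetical reconstruction; only its effects ('Reloading.', 'Starting
    # nova_compute container...') appear in this log.
    unit = dedent("""\
        [Unit]
        Description=nova_compute container

        [Service]
        Restart=always
        ExecStart=/usr/bin/podman start nova_compute
        ExecStop=/usr/bin/podman stop -t 60 nova_compute
        PIDFile=/run/nova_compute.pid
        Type=forking

        [Install]
        WantedBy=multi-user.target
    """)

    Path("/etc/systemd/system/edpm_nova_compute.service").write_text(unit)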
Feb 24 15:36:26 compute-0 sudo[187247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjabbvwprftiqcweugewkzqcvgptkcvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947384.8640234-1329-57239251859217/AnsiballZ_systemd.py'
Feb 24 15:36:26 compute-0 sudo[187247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:26 compute-0 python3.9[187250]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:36:27 compute-0 systemd[1]: Reloading.
Feb 24 15:36:27 compute-0 systemd-rc-local-generator[187274]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:36:27 compute-0 systemd-sysv-generator[187278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:36:27 compute-0 systemd[1]: Starting nova_compute container...
Feb 24 15:36:27 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:27 compute-0 podman[187297]: 2026-02-24 15:36:27.440357217 +0000 UTC m=+0.128784050 container init 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:36:27 compute-0 podman[187297]: 2026-02-24 15:36:27.455660277 +0000 UTC m=+0.144087070 container start 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 24 15:36:27 compute-0 podman[187297]: nova_compute
Feb 24 15:36:27 compute-0 nova_compute[187312]: + sudo -E kolla_set_configs
Feb 24 15:36:27 compute-0 systemd[1]: Started nova_compute container.
Feb 24 15:36:27 compute-0 sudo[187247]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Validating config file
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying service configuration files
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Deleting /etc/ceph
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Creating directory /etc/ceph
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /etc/ceph
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Writing out command to execute
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:27 compute-0 nova_compute[187312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 24 15:36:27 compute-0 nova_compute[187312]: ++ cat /run_command
Feb 24 15:36:27 compute-0 nova_compute[187312]: + CMD=nova-compute
Feb 24 15:36:27 compute-0 nova_compute[187312]: + ARGS=
Feb 24 15:36:27 compute-0 nova_compute[187312]: + sudo kolla_copy_cacerts
Feb 24 15:36:27 compute-0 nova_compute[187312]: + [[ ! -n '' ]]
Feb 24 15:36:27 compute-0 nova_compute[187312]: + . kolla_extend_start
Feb 24 15:36:27 compute-0 nova_compute[187312]: Running command: 'nova-compute'
Feb 24 15:36:27 compute-0 nova_compute[187312]: + echo 'Running command: '\''nova-compute'\'''
Feb 24 15:36:27 compute-0 nova_compute[187312]: + umask 0022
Feb 24 15:36:27 compute-0 nova_compute[187312]: + exec nova-compute
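One notable step in the kolla_set_configs trace above: /usr/sbin/iscsiadm inside the container is replaced with a run-on-host wrapper, so iSCSI management runs in the host's namespaces instead of the container's. The shipped wrapper is a shell script; a hypothetical Python equivalent of the technique, assuming nsenter against PID 1 (workable here because the container runs with pid: host and privileged: true):

    import os
    import sys

    def run_on_host(argv):
        """Re-exec the wrapped command inside the host's namespaces via
        nsenter. Hypothetical stand-in for the run-on-host shell wrapper."""
        tool = os.path.basename(argv[0])  # e.g. 'iscsiadm'
        os.execvp("nsenter", ["nsenter", "--target", "1", "--mount",
                              "--net", "--ipc", tool] + argv[1:])

    if __name__ == "__main__":
        run_on_host(sys.argv)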
Feb 24 15:36:28 compute-0 podman[187405]: 2026-02-24 15:36:28.095459339 +0000 UTC m=+0.059821121 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:36:28 compute-0 python3.9[187490]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.294 187316 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.294 187316 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.294 187316 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.294 187316 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
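The four os_vif lines above show plugin discovery: linux_bridge, noop and ovs are found as setuptools entry points and loaded at initialize() time. os_vif does this with stevedore; a minimal sketch of the same enumeration, where the 'os_vif' namespace name is an assumption inferred from the plugin names in the log:

    from stevedore import extension

    # Enumerate VIF plugins the way an entry-point-driven loader would.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name {ext.name!r}")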
Feb 24 15:36:29 compute-0 sudo[187643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usrtsewcijplphsjkbfjyhrfexvgozat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947389.0937498-1374-221955864379068/AnsiballZ_stat.py'
Feb 24 15:36:29 compute-0 sudo[187643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.405 187316 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.414 187316 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:36:29 compute-0 nova_compute[187312]: 2026-02-24 15:36:29.414 187316 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
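The failed grep above is a capability probe, not an error: nova checks whether the iscsiadm on PATH mentions node.session.scan (manual-scan support) and tolerates exit code 1. The "failed. Not Retrying." line is oslo.concurrency raising ProcessExecutionError once, which the caller then catches. A sketch of that call pattern:

    from oslo_concurrency import processutils

    try:
        processutils.execute("grep", "-F", "node.session.scan",
                             "/sbin/iscsiadm")
        manual_scan = True    # rc 0: string present, manual scan supported
    except processutils.ProcessExecutionError:
        manual_scan = False   # rc 1 as logged above: string absent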
Feb 24 15:36:29 compute-0 python3.9[187646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:36:29 compute-0 sudo[187643]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:30 compute-0 sudo[187771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukftgctzbvfutnwdqskdgbkrkihmouzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947389.0937498-1374-221955864379068/AnsiballZ_copy.py'
Feb 24 15:36:30 compute-0 sudo[187771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:30 compute-0 python3.9[187774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947389.0937498-1374-221955864379068/.source.yaml _original_basename=.otlxi9yc follow=False checksum=576a4462fd3b80cc29f3eeeefc738dc0b32edb04 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.207 187316 INFO nova.virt.driver [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 24 15:36:30 compute-0 sudo[187771]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.299 187316 INFO nova.compute.provider_config [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.327 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.327 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.328 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.328 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.328 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.328 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.328 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.329 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.330 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.331 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.332 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.332 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.332 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.332 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.332 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.333 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.334 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.334 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.334 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.334 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.334 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.335 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.336 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.337 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.338 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.339 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.340 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.341 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.342 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.342 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.342 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.342 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.342 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.343 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.344 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.345 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.346 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.347 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.348 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.349 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.350 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.351 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.352 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.353 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.354 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.354 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.354 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.354 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.354 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.355 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.356 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.357 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.358 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.359 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.360 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.360 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.360 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.360 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.360 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.361 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.362 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.363 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.364 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.365 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.366 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.367 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.368 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.368 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.368 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.368 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.368 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.369 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.370 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.371 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.372 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.373 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.374 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.375 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.375 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.375 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.375 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.375 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.376 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.377 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.378 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.379 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.380 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.381 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.382 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.383 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.384 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.385 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.386 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.386 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.386 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.386 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.386 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.387 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.388 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.388 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.388 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.388 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.388 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.389 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.390 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.391 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.392 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.393 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.394 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.394 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.394 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.394 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.394 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.395 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.396 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.397 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.398 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.399 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.400 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.401 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.402 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.403 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.404 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.405 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.406 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.407 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.408 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.408 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.408 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.408 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.408 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.409 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.410 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.411 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.411 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.411 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.411 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.411 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.412 187316 WARNING oslo_config.cfg [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 24 15:36:30 compute-0 nova_compute[187312]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 24 15:36:30 compute-0 nova_compute[187312]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 24 15:36:30 compute-0 nova_compute[187312]: and ``live_migration_inbound_addr`` respectively.
Feb 24 15:36:30 compute-0 nova_compute[187312]: ).  Its value may be silently ignored in the future.
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.412 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
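The deprecation warning above, taken together with the configured value ``qemu+tls://%s/system``, means the same behavior should instead be expressed through ``live_migration_scheme`` and ``live_migration_inbound_addr`` in the ``[libvirt]`` section of nova.conf. A minimal sketch of the equivalent replacement, assuming only what the log shows (the TLS transport implied by the ``qemu+tls`` URI; the inbound address is deployment-specific and therefore left commented out as a hypothetical placeholder):

    [libvirt]
    # Replaces the deprecated:
    #   live_migration_uri = qemu+tls://%s/system
    # Scheme "tls" makes Nova build a qemu+tls:// migration URI.
    live_migration_scheme = tls
    # Optional: address the target host advertises for incoming
    # migrations; when unset, Nova falls back to the host name.
    # The value below is a placeholder, not from this deployment.
    #live_migration_inbound_addr = <migration-ip-or-hostname>

With this in place the deprecated option can be dropped entirely, and the WARNING at 15:36:30.412 should no longer appear on the next service start.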
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.412 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.412 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.412 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.413 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.414 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.415 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.416 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.417 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.417 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.417 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.417 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.417 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.418 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.419 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.420 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.420 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.420 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.420 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.420 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.421 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.422 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.423 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.424 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.425 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.426 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.427 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.428 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.429 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.430 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.431 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.432 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.433 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.434 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.434 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.434 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.434 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.434 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.435 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.436 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.436 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.436 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.436 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.436 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.437 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.438 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.439 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.440 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.441 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.442 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.442 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.442 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.442 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.442 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.443 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.444 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.445 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.446 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.446 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.446 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.446 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.446 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.447 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.448 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.449 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.450 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.451 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.452 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.453 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.454 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.454 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.454 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.454 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.454 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.455 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.455 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.455 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.455 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.455 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.456 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.457 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.458 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.459 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.460 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.461 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.462 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.462 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.462 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.462 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.462 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.463 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.463 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.463 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.463 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.463 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.464 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.465 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.466 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.467 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.468 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.469 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.469 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.469 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.469 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.469 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.470 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.471 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.472 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.473 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.474 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.475 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.476 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.477 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.478 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.479 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.480 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.481 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.481 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.481 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.481 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.481 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.482 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.483 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.484 187316 DEBUG oslo_service.service [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.485 187316 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.502 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.503 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.503 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.504 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 24 15:36:30 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 24 15:36:30 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.569 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f119d71e100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.571 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f119d71e100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.572 187316 INFO nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Connection event '1' reason 'None'
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.601 187316 WARNING nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 24 15:36:30 compute-0 nova_compute[187312]: 2026-02-24 15:36:30.602 187316 DEBUG nova.virt.libvirt.volume.mount [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 24 15:36:31 compute-0 python3.9[187976]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.393 187316 INFO nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Libvirt host capabilities <capabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]: 
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <host>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <uuid>dc915849-1080-4855-b939-c41e7d9bcc71</uuid>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <arch>x86_64</arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model>EPYC-Rome-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <vendor>AMD</vendor>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <microcode version='16777317'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <signature family='23' model='49' stepping='0'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='x2apic'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='tsc-deadline'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='osxsave'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='hypervisor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='tsc_adjust'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='spec-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='stibp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='arch-capabilities'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='cmp_legacy'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='topoext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='virt-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='lbrv'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='tsc-scale'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='vmcb-clean'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='pause-filter'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='pfthreshold'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='svme-addr-chk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='rdctl-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='skip-l1dfl-vmentry'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='mds-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature name='pschange-mc-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <pages unit='KiB' size='4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <pages unit='KiB' size='2048'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <pages unit='KiB' size='1048576'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <power_management>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <suspend_mem/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <suspend_disk/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <suspend_hybrid/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </power_management>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <iommu support='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <migration_features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <live/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <uri_transports>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <uri_transport>tcp</uri_transport>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <uri_transport>rdma</uri_transport>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </uri_transports>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </migration_features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <topology>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <cells num='1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <cell id='0'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <memory unit='KiB'>7864276</memory>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <pages unit='KiB' size='4'>1966069</pages>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <pages unit='KiB' size='2048'>0</pages>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <distances>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <sibling id='0' value='10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           </distances>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           <cpus num='8'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:           </cpus>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         </cell>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </cells>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </topology>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <cache>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </cache>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <secmodel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model>selinux</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <doi>0</doi>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </secmodel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <secmodel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model>dac</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <doi>0</doi>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </secmodel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </host>
Feb 24 15:36:31 compute-0 nova_compute[187312]: 
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <guest>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <os_type>hvm</os_type>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <arch name='i686'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <wordsize>32</wordsize>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <domain type='qemu'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <domain type='kvm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <pae/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <nonpae/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <acpi default='on' toggle='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <apic default='on' toggle='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <cpuselection/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <deviceboot/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <disksnapshot default='on' toggle='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <externalSnapshot/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </guest>
Feb 24 15:36:31 compute-0 nova_compute[187312]: 
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <guest>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <os_type>hvm</os_type>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <arch name='x86_64'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <wordsize>64</wordsize>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <domain type='qemu'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <domain type='kvm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <acpi default='on' toggle='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <apic default='on' toggle='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <cpuselection/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <deviceboot/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <disksnapshot default='on' toggle='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <externalSnapshot/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </guest>
Feb 24 15:36:31 compute-0 nova_compute[187312]: 
Feb 24 15:36:31 compute-0 nova_compute[187312]: </capabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]: 
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.404 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.427 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 24 15:36:31 compute-0 nova_compute[187312]: <domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <domain>kvm</domain>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <arch>i686</arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <vcpu max='240'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <iothreads supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <os supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='firmware'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <loader supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>rom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pflash</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='readonly'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>yes</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='secure'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </loader>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </os>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='maximumMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <vendor>AMD</vendor>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='succor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='custom' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <memoryBacking supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='sourceType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>anonymous</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>memfd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </memoryBacking>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <disk supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='diskDevice'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>disk</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cdrom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>floppy</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>lun</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ide</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>fdc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>sata</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </disk>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <graphics supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vnc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egl-headless</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </graphics>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <video supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='modelType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vga</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cirrus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>none</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>bochs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ramfb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </video>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hostdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='mode'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>subsystem</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='startupPolicy'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>mandatory</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>requisite</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>optional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='subsysType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pci</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='capsType'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='pciBackend'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hostdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <rng supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>random</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </rng>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <filesystem supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='driverType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>path</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>handle</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtiofs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </filesystem>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tpm supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-tis</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-crb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emulator</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>external</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendVersion'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>2.0</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </tpm>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <redirdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </redirdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <channel supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </channel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <crypto supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </crypto>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <interface supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>passt</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </interface>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <panic supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>isa</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>hyperv</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </panic>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <console supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>null</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dev</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pipe</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stdio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>udp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tcp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu-vdagent</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </console>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <gic supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <genid supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backup supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <async-teardown supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <s390-pv supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <ps2 supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tdx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sev supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sgx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hyperv supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='features'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>relaxed</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vapic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>spinlocks</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vpindex</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>runtime</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>synic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stimer</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reset</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vendor_id</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>frequencies</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reenlightenment</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tlbflush</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ipi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>avic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emsr_bitmap</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>xmm_input</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hyperv>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <launchSecurity supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]: </domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.437 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 24 15:36:31 compute-0 nova_compute[187312]: <domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <domain>kvm</domain>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <arch>i686</arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <vcpu max='4096'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <iothreads supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <os supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='firmware'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <loader supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>rom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pflash</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='readonly'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>yes</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='secure'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </loader>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </os>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='maximumMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <vendor>AMD</vendor>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='succor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='custom' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <memoryBacking supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='sourceType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>anonymous</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>memfd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </memoryBacking>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <disk supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='diskDevice'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>disk</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cdrom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>floppy</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>lun</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>fdc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>sata</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </disk>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <graphics supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vnc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egl-headless</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </graphics>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <video supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='modelType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vga</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cirrus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>none</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>bochs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ramfb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </video>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hostdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='mode'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>subsystem</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='startupPolicy'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>mandatory</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>requisite</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>optional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='subsysType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pci</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='capsType'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='pciBackend'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hostdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <rng supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>random</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </rng>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <filesystem supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='driverType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>path</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>handle</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtiofs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </filesystem>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tpm supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-tis</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-crb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emulator</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>external</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendVersion'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>2.0</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </tpm>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <redirdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </redirdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <channel supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </channel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <crypto supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </crypto>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <interface supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>passt</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </interface>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <panic supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>isa</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>hyperv</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </panic>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <console supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>null</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dev</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pipe</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stdio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>udp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tcp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu-vdagent</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </console>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <gic supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <genid supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backup supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <async-teardown supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <s390-pv supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <ps2 supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tdx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sev supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sgx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hyperv supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='features'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>relaxed</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vapic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>spinlocks</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vpindex</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>runtime</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>synic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stimer</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reset</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vendor_id</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>frequencies</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reenlightenment</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tlbflush</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ipi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>avic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emsr_bitmap</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>xmm_input</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hyperv>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <launchSecurity supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]: </domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.507 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.511 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 24 15:36:31 compute-0 nova_compute[187312]: <domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <domain>kvm</domain>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <arch>x86_64</arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <vcpu max='4096'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <iothreads supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <os supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='firmware'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>efi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <loader supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>rom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pflash</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='readonly'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>yes</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='secure'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>yes</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </loader>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </os>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='maximumMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <vendor>AMD</vendor>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='succor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='custom' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <memoryBacking supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='sourceType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>anonymous</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>memfd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </memoryBacking>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <disk supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='diskDevice'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>disk</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cdrom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>floppy</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>lun</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>fdc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>sata</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </disk>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <graphics supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vnc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egl-headless</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </graphics>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <video supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='modelType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vga</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cirrus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>none</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>bochs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ramfb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </video>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hostdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='mode'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>subsystem</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='startupPolicy'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>mandatory</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>requisite</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>optional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='subsysType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pci</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='capsType'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='pciBackend'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hostdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <rng supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>random</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </rng>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <filesystem supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='driverType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>path</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>handle</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtiofs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </filesystem>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tpm supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-tis</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-crb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emulator</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>external</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendVersion'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>2.0</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </tpm>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <redirdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </redirdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <channel supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </channel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <crypto supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </crypto>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <interface supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>passt</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </interface>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <panic supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>isa</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>hyperv</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </panic>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <console supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>null</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dev</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pipe</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stdio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>udp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tcp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu-vdagent</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </console>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <gic supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <genid supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backup supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <async-teardown supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <s390-pv supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <ps2 supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tdx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sev supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sgx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hyperv supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='features'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>relaxed</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vapic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>spinlocks</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vpindex</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>runtime</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>synic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stimer</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reset</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vendor_id</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>frequencies</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reenlightenment</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tlbflush</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ipi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>avic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emsr_bitmap</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>xmm_input</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hyperv>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <launchSecurity supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]: </domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.571 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 24 15:36:31 compute-0 nova_compute[187312]: <domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <domain>kvm</domain>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <arch>x86_64</arch>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <vcpu max='240'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <iothreads supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <os supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='firmware'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <loader supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>rom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pflash</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='readonly'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>yes</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='secure'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>no</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </loader>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </os>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='maximumMigratable'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>on</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>off</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <vendor>AMD</vendor>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='succor'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <mode name='custom' supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ddpd-u'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sha512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm3'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sm4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Denverton-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amd-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='auto-ibrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='perfmon-v2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbpb'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='stibp-always-on'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='EPYC-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-128'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-256'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx10-512'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='prefetchiti'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Haswell-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512er'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512pf'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fma4'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tbm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xop'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='amx-tile'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-bf16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-fp16'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bitalg'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrc'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fzrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='la57'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='taa-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ifma'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cmpccxadd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fbsdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='fsrs'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ibrs-all'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='intel-psfd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='lam'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mcdt-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pbrsb-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='psdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='serialize'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vaes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='hle'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='rtm'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512bw'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512cd'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512dq'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512f'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='avx512vl'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='invpcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pcid'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='pku'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='mpx'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='core-capability'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='split-lock-detect'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='cldemote'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='erms'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='gfni'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdir64b'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='movdiri'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='xsaves'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='athlon-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='core2duo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='coreduo-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='n270-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='ss'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <blockers model='phenom-v1'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnow'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <feature name='3dnowext'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </blockers>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </mode>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </cpu>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <memoryBacking supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <enum name='sourceType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>anonymous</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <value>memfd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </memoryBacking>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <disk supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='diskDevice'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>disk</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cdrom</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>floppy</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>lun</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ide</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>fdc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>sata</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </disk>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <graphics supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vnc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egl-headless</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </graphics>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <video supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='modelType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vga</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>cirrus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>none</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>bochs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ramfb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </video>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hostdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='mode'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>subsystem</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='startupPolicy'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>mandatory</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>requisite</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>optional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='subsysType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pci</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>scsi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='capsType'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='pciBackend'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hostdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <rng supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtio-non-transitional</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>random</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>egd</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </rng>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <filesystem supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='driverType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>path</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>handle</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>virtiofs</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </filesystem>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tpm supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-tis</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tpm-crb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emulator</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>external</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendVersion'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>2.0</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </tpm>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <redirdev supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='bus'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>usb</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </redirdev>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <channel supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </channel>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <crypto supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendModel'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>builtin</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </crypto>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <interface supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='backendType'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>default</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>passt</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </interface>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <panic supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='model'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>isa</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>hyperv</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </panic>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <console supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='type'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>null</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vc</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pty</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dev</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>file</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>pipe</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stdio</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>udp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tcp</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>unix</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>qemu-vdagent</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>dbus</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </console>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </devices>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   <features>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <gic supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <genid supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <backup supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <async-teardown supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <s390-pv supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <ps2 supported='yes'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <tdx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sev supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <sgx supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <hyperv supported='yes'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <enum name='features'>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>relaxed</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vapic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>spinlocks</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vpindex</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>runtime</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>synic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>stimer</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reset</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>vendor_id</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>frequencies</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>reenlightenment</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>tlbflush</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>ipi</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>avic</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>emsr_bitmap</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <value>xmm_input</value>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </enum>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       <defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:31 compute-0 nova_compute[187312]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:31 compute-0 nova_compute[187312]:       </defaults>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     </hyperv>
Feb 24 15:36:31 compute-0 nova_compute[187312]:     <launchSecurity supported='no'/>
Feb 24 15:36:31 compute-0 nova_compute[187312]:   </features>
Feb 24 15:36:31 compute-0 nova_compute[187312]: </domainCapabilities>
Feb 24 15:36:31 compute-0 nova_compute[187312]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
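[editor's note] The domainCapabilities XML dumped above by _get_domain_capabilities pairs each <model usable='no'> entry with a sibling <blockers model='...'> element listing the features this host lacks. A minimal sketch of reproducing that view with libvirt-python (assumes a local qemu:///system daemon; the script is illustrative, not nova's code):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    # The usable/blockers pairs live under <cpu><mode name='custom'>.
    mode = ET.fromstring(xml).find(".//cpu/mode[@name='custom']")
    blockers = {b.get('model'): [f.get('name') for f in b.findall('feature')]
                for b in mode.findall('blockers')}
    for model in mode.findall('model'):
        if model.get('usable') == 'no':
            print(model.text, 'blocked by:', ', '.join(blockers.get(model.text, [])))

Run against the host above this would print, e.g., Skylake-Server-IBRS with its avx512*/erms/hle/invpcid/pcid/pku/rtm blockers, while the usable='yes' Westmere variants would not appear.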
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.630 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.631 187316 INFO nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Secure Boot support detected
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.633 187316 INFO nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.644 187316 DEBUG nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.684 187316 INFO nova.virt.node [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Determined node identity 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from /var/lib/nova/compute_id
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.708 187316 WARNING nova.compute.manager [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Compute nodes ['3c29c547-d990-4bd5-9bfd-810bbeade4e4'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.745 187316 INFO nova.compute.manager [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.781 187316 WARNING nova.compute.manager [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.781 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.782 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.782 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
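[editor's note] The Acquiring/acquired/released trio above is the standard oslo.concurrency pattern nova uses to serialize resource-tracker work; the DEBUG lines (and the 'waited'/'held' timings) are emitted by the decorator's inner wrapper at the lockutils.py lines cited. A minimal sketch of the same pattern (function name illustrative, not nova's):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body runs with the named lock held; lockutils logs the
        # acquire/release DEBUG lines seen in this journal.
        pass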
Feb 24 15:36:31 compute-0 nova_compute[187312]: 2026-02-24 15:36:31.782 187316 DEBUG nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:36:31 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 24 15:36:31 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 24 15:36:31 compute-0 python3.9[188138]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.094 187316 WARNING nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.095 187316 DEBUG nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6060MB free_disk=72.49087524414062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.096 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.096 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.114 187316 WARNING nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:3c29c547-d990-4bd5-9bfd-810bbeade4e4: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 3c29c547-d990-4bd5-9bfd-810bbeade4e4 could not be found.
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.137 187316 INFO nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 3c29c547-d990-4bd5-9bfd-810bbeade4e4
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.206 187316 DEBUG nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:36:32 compute-0 nova_compute[187312]: 2026-02-24 15:36:32.206 187316 DEBUG nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:36:32 compute-0 python3.9[188311]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.436 187316 INFO nova.scheduler.client.report [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] [req-115c83e7-26d3-40a4-9712-2d04a2280b82] Created resource provider record via placement API for resource provider with UUID 3c29c547-d990-4bd5-9bfd-810bbeade4e4 and name compute-0.ctlplane.example.com.
Feb 24 15:36:33 compute-0 sudo[188461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmewsqeybmcfstwheobwijzynwqyqmag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947393.0620012-1424-107486703653569/AnsiballZ_podman_container.py'
Feb 24 15:36:33 compute-0 sudo[188461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:33 compute-0 python3.9[188464]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.875 187316 DEBUG nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 24 15:36:33 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.876 187316 INFO nova.virt.libvirt.host [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] kernel doesn't support AMD SEV
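[editor's note] nova reaches the "kernel doesn't support AMD SEV" conclusion by reading the kvm_amd module parameter it just logged ([N] means disabled). A minimal re-creation of that check (not nova's exact code, which lives in host.py:1803 as logged; nova also accepts other truthy spellings):

    from pathlib import Path

    def kernel_supports_amd_sev(param=Path('/sys/module/kvm_amd/parameters/sev')):
        # Parameter file is absent when kvm_amd isn't loaded;
        # 'Y' or '1' means SEV is enabled in the kernel module.
        return param.exists() and param.read_text().strip() in ('Y', '1')

    print(kernel_supports_amd_sev())  # False on the host above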
Feb 24 15:36:33 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.877 187316 DEBUG nova.compute.provider_tree [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
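[editor's note] The inventory just pushed to ProviderTree is what placement actually schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged above (plain arithmetic, not nova code):

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0 / VCPU 32.0 / DISK_GB 71.1

So this 8-vCPU host can have up to 32 VCPU allocated (ratio 4.0), memory overcommit is disabled (ratio 1.0 minus the 512 MB reservation), and disk is deliberately undercommitted at 0.9.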
Feb 24 15:36:33 compute-0 sudo[188461]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.877 187316 DEBUG nova.virt.libvirt.driver [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.953 187316 DEBUG nova.scheduler.client.report [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Updated inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.954 187316 DEBUG nova.compute.provider_tree [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Updating resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 24 15:36:33 compute-0 nova_compute[187312]: 2026-02-24 15:36:33.954 187316 DEBUG nova.compute.provider_tree [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.087 187316 DEBUG nova.compute.provider_tree [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Updating resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.130 187316 DEBUG nova.compute.resource_tracker [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.131 187316 DEBUG oslo_concurrency.lockutils [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.131 187316 DEBUG nova.service [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.230 187316 DEBUG nova.service [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 24 15:36:34 compute-0 nova_compute[187312]: 2026-02-24 15:36:34.231 187316 DEBUG nova.servicegroup.drivers.db [None req-999e3ddf-e656-4af3-84d4-cd71c8c41cf9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 24 15:36:34 compute-0 sudo[188638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunagvaraxxlcnpocwptuekxvunzqnqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947394.1318927-1432-46825848653872/AnsiballZ_systemd.py'
Feb 24 15:36:34 compute-0 sudo[188638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:34 compute-0 python3.9[188641]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:36:34 compute-0 systemd[1]: Stopping nova_compute container...
Feb 24 15:36:35 compute-0 nova_compute[187312]: 2026-02-24 15:36:35.278 187316 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 24 15:36:35 compute-0 nova_compute[187312]: 2026-02-24 15:36:35.281 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:36:35 compute-0 nova_compute[187312]: 2026-02-24 15:36:35.281 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:36:35 compute-0 nova_compute[187312]: 2026-02-24 15:36:35.282 187316 DEBUG oslo_concurrency.lockutils [None req-9d432b34-855d-474d-8e97-62d6e1caca2f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:36:35 compute-0 virtqemud[187820]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 24 15:36:35 compute-0 virtqemud[187820]: hostname: compute-0
Feb 24 15:36:35 compute-0 virtqemud[187820]: End of file while reading data: Input/output error
Feb 24 15:36:35 compute-0 systemd[1]: libpod-6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0.scope: Deactivated successfully.
Feb 24 15:36:35 compute-0 systemd[1]: libpod-6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0.scope: Consumed 2.913s CPU time.
Feb 24 15:36:35 compute-0 podman[188645]: 2026-02-24 15:36:35.660701758 +0000 UTC m=+0.841146981 container stop 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:36:35 compute-0 podman[188645]: 2026-02-24 15:36:35.690208657 +0000 UTC m=+0.870653890 container died 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445-merged.mount: Deactivated successfully.
Feb 24 15:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0-userdata-shm.mount: Deactivated successfully.
Feb 24 15:36:35 compute-0 podman[188645]: 2026-02-24 15:36:35.741579055 +0000 UTC m=+0.922024298 container cleanup 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 15:36:35 compute-0 podman[188645]: nova_compute
Feb 24 15:36:35 compute-0 podman[188675]: nova_compute
Feb 24 15:36:35 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 24 15:36:35 compute-0 systemd[1]: Stopped nova_compute container.
Feb 24 15:36:35 compute-0 systemd[1]: Starting nova_compute container...
Feb 24 15:36:35 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3579b85e3276029b9900cd3278a2a2f736f274c29a047c2dbaa0b3a4a7f73445/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:35 compute-0 podman[188688]: 2026-02-24 15:36:35.975109494 +0000 UTC m=+0.125959562 container init 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute)
Feb 24 15:36:35 compute-0 podman[188688]: 2026-02-24 15:36:35.990213147 +0000 UTC m=+0.141063135 container start 6d684ab71e30a9dd6a76b1ad6c3d2f4bdfdab72aeaefef27d1f208801d6c11b0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:36:35 compute-0 podman[188688]: nova_compute
Feb 24 15:36:35 compute-0 nova_compute[188703]: + sudo -E kolla_set_configs
Feb 24 15:36:35 compute-0 systemd[1]: Started nova_compute container.
Feb 24 15:36:36 compute-0 sudo[188638]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Validating config file
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying service configuration files
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /etc/ceph
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Creating directory /etc/ceph
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /etc/ceph
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Writing out command to execute
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:36 compute-0 nova_compute[188703]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 24 15:36:36 compute-0 nova_compute[188703]: ++ cat /run_command
Feb 24 15:36:36 compute-0 nova_compute[188703]: + CMD=nova-compute
Feb 24 15:36:36 compute-0 nova_compute[188703]: + ARGS=
Feb 24 15:36:36 compute-0 nova_compute[188703]: + sudo kolla_copy_cacerts
Feb 24 15:36:36 compute-0 nova_compute[188703]: + [[ ! -n '' ]]
Feb 24 15:36:36 compute-0 nova_compute[188703]: + . kolla_extend_start
Feb 24 15:36:36 compute-0 nova_compute[188703]: Running command: 'nova-compute'
Feb 24 15:36:36 compute-0 nova_compute[188703]: + echo 'Running command: '\''nova-compute'\'''
Feb 24 15:36:36 compute-0 nova_compute[188703]: + umask 0022
Feb 24 15:36:36 compute-0 nova_compute[188703]: + exec nova-compute
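[annotation] The trace above is kolla_start in full: kolla_set_configs reads /var/lib/kolla/config_files/config.json, applies the COPY_ALWAYS strategy (delete the destination, copy the source back in, reset permissions), and then the container execs the command stored in /run_command. Below is a minimal Python sketch of that copy loop, assuming a simplified config.json schema with source/dest/perm keys; the real kolla_set_configs also handles ownership changes, globs, optional entries, and directory merges.

    import json
    import os
    import shutil

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        # COPY_ALWAYS: replace every destination on each container start,
        # matching the Deleting/Copying/Setting permission lines above.
        with open(path) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            src, dest = item["source"], item["dest"]
            if os.path.lexists(dest):
                print(f"Deleting {dest}")
                if os.path.isdir(dest) and not os.path.islink(dest):
                    shutil.rmtree(dest)
                else:
                    os.unlink(dest)
            print(f"Copying {src} to {dest}")
            if os.path.isdir(src):
                shutil.copytree(src, dest)
            else:
                shutil.copy2(src, dest)
            print(f"Setting permission for {dest}")
            os.chmod(dest, int(str(item.get("perm", "0600")), 8))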
Feb 24 15:36:36 compute-0 sudo[188864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjtioeuqryaiodymcldsihtypqxqbvjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947396.5039942-1441-215010389506058/AnsiballZ_podman_container.py'
Feb 24 15:36:36 compute-0 sudo[188864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:37 compute-0 python3.9[188868]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
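[annotation] The "Invoked with ..." line is ansible-core's module-invocation syslog record; parameters marked no_log in the module's argument spec are masked, which is why secrets shows as NOT_LOGGING_PARAMETER above. A hypothetical sketch of that masking follows (log_invocation and its arguments are illustrative names, not Ansible's actual API):

    import syslog

    def log_invocation(module_name, params, no_log_params):
        # Hypothetical: mask sensitive values the way the line above
        # renders 'secrets=NOT_LOGGING_PARAMETER'.
        shown = {
            k: ("NOT_LOGGING_PARAMETER" if k in no_log_params else v)
            for k, v in params.items()
        }
        msg = " ".join(f"{k}={v}" for k, v in shown.items())
        syslog.openlog(f"ansible-{module_name}", 0, syslog.LOG_USER)
        syslog.syslog(syslog.LOG_INFO, f"Invoked with {msg}")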
Feb 24 15:36:37 compute-0 podman[188870]: 2026-02-24 15:36:37.153635679 +0000 UTC m=+0.103758805 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 24 15:36:37 compute-0 systemd[1]: Started libpod-conmon-64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6.scope.
Feb 24 15:36:37 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d143500903d49520323a4b606a18971549fce7814be6948ea1cc3342a2f11a/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d143500903d49520323a4b606a18971549fce7814be6948ea1cc3342a2f11a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d143500903d49520323a4b606a18971549fce7814be6948ea1cc3342a2f11a/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 24 15:36:37 compute-0 podman[188906]: 2026-02-24 15:36:37.297534182 +0000 UTC m=+0.171521651 container init 64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:36:37 compute-0 podman[188906]: 2026-02-24 15:36:37.30546007 +0000 UTC m=+0.179447509 container start 64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Feb 24 15:36:37 compute-0 python3.9[188868]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Applying nova statedir ownership
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 24 15:36:37 compute-0 nova_compute_init[188935]: INFO:nova_statedir:Nova statedir ownership complete
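[annotation] The nova_compute_init run above walked /var/lib/nova, re-owning anything not already at the nova uid/gid (42436 here) and relabeling each path with the container SELinux context; entries named in NOVA_STATEDIR_OWNERSHIP_SKIP are left untouched (treated below as a colon-separated list, which is an assumption). A condensed sketch of that walk, under those assumptions; the real nova_statedir_ownership.py also covers symlink edge cases and failure logging:

    import logging
    import os
    import subprocess

    LOG = logging.getLogger("nova_statedir")
    TARGET_UID = TARGET_GID = 42436
    SKIP = set(os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":"))
    SKIP.discard("")
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_ownership(root="/var/lib/nova"):
        LOG.info("Applying nova statedir ownership")
        LOG.info("Target ownership for %s: %d:%d", root, TARGET_UID, TARGET_GID)
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                LOG.info("Checking uid: %d gid: %d path: %s",
                         st.st_uid, st.st_gid, path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    LOG.info("Changing ownership of %s from %d:%d to %d:%d",
                             path, st.st_uid, st.st_gid, TARGET_UID, TARGET_GID)
                    os.lchown(path, TARGET_UID, TARGET_GID)
                else:
                    LOG.info("Ownership of %s already %d:%d",
                             path, TARGET_UID, TARGET_GID)
                # Relabel so the path stays usable from inside the container.
                subprocess.run(["chcon", "-h", CONTEXT, path], check=False)
        LOG.info("Nova statedir ownership complete")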
Feb 24 15:36:37 compute-0 systemd[1]: libpod-64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6.scope: Deactivated successfully.
Feb 24 15:36:37 compute-0 podman[188950]: 2026-02-24 15:36:37.428577373 +0000 UTC m=+0.039946966 container died 64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:36:37 compute-0 sudo[188864]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6-userdata-shm.mount: Deactivated successfully.
Feb 24 15:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d143500903d49520323a4b606a18971549fce7814be6948ea1cc3342a2f11a-merged.mount: Deactivated successfully.
Feb 24 15:36:37 compute-0 podman[188950]: 2026-02-24 15:36:37.473955379 +0000 UTC m=+0.085324972 container cleanup 64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute_init, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'e7e3cbaf325aec9605e3b278876c2fbe0bbfb185d88f3890dcc1beb81fc158fe'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init)
Feb 24 15:36:37 compute-0 systemd[1]: libpod-conmon-64d069e26960e8ce722f43e4b4daf74401a70d2648d3b479d6fc45a074e065c6.scope: Deactivated successfully.
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.791 188707 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.792 188707 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.792 188707 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.792 188707 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
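[annotation] The three "Loaded VIF plugin class" lines above are os_vif's plugin scan: VIF drivers (linux_bridge, noop, ovs) are setuptools entry points in the os_vif namespace, discovered through stevedore. A minimal standalone sketch of the same discovery pattern, not os_vif's actual initialize():

    from stevedore import extension

    def load_vif_plugins():
        # Scan the 'os_vif' entry-point namespace without instantiating
        # the plugins; each extension carries its name and plugin class.
        mgr = extension.ExtensionManager(namespace="os_vif",
                                         invoke_on_load=False)
        for ext in mgr:
            print(f"Loaded VIF plugin class '{ext.plugin}' "
                  f"with name '{ext.name}'")
        return {ext.name: ext.plugin for ext in mgr}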
Feb 24 15:36:37 compute-0 sshd-session[163709]: Connection closed by 192.168.122.30 port 36340
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.911 188707 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:36:37 compute-0 sshd-session[163706]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:36:37 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Feb 24 15:36:37 compute-0 systemd[1]: session-23.scope: Consumed 1min 38.760s CPU time.
Feb 24 15:36:37 compute-0 systemd-logind[813]: Session 23 logged out. Waiting for processes to exit.
Feb 24 15:36:37 compute-0 systemd-logind[813]: Removed session 23.
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.933 188707 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:36:37 compute-0 nova_compute[188703]: 2026-02-24 15:36:37.934 188707 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
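[annotation] The exit status 1 above is a capability probe, not a failure: grep -F simply did not find the literal string node.session.scan inside /sbin/iscsiadm (here the run-on-host shim installed earlier in this log), so manual-scan support is treated as unavailable. A sketch of the probe:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F exits 0 if the fixed string occurs in the file, 1 if not.
        result = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0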
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.416 188707 INFO nova.virt.driver [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.532 188707 INFO nova.compute.provider_config [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.551 188707 DEBUG oslo_concurrency.lockutils [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.551 188707 DEBUG oslo_concurrency.lockutils [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.552 188707 DEBUG oslo_concurrency.lockutils [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.552 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.552 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.552 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.552 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.553 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.553 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.553 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.553 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.553 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.554 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.554 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.554 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.554 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.554 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.555 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.556 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.556 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.556 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.556 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.556 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.557 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.557 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.557 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.557 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.557 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.558 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.558 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.558 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.558 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.558 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.559 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.559 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.559 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.559 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.559 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.560 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.560 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.560 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.560 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.560 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.561 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.561 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.561 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.561 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.561 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.562 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.562 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.562 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.562 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.562 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.563 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.564 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.565 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.565 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.565 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.565 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.565 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.566 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.566 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.566 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.566 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.566 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.567 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.568 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.568 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.568 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.568 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.568 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.569 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.569 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.569 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.569 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.569 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.570 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.571 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.571 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.571 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.571 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.571 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.572 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.572 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.572 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.572 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.572 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.573 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.573 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.573 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.573 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.573 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.574 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.574 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.574 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.574 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.574 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.575 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.575 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.575 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.575 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.575 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.576 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.577 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.577 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.577 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.577 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.577 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.578 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.578 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.578 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.578 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.578 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.579 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.579 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.579 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.579 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.579 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.580 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.580 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.580 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.580 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.580 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.581 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.581 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.581 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.581 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.581 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.582 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.582 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.582 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.582 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.582 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.583 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.583 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.583 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.583 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.583 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.584 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.584 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.584 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.584 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.584 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.585 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.585 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.585 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.585 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.585 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.586 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.586 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.586 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.586 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.586 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.587 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.587 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.587 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.587 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.587 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.588 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.588 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.588 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.588 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.588 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.589 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.590 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.590 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.590 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.590 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.590 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.591 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.592 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.593 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.594 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.595 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.596 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.597 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.598 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.599 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.600 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.601 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.602 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.603 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.604 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.605 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.606 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.607 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.608 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.609 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.610 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.611 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.612 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.613 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.614 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.615 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.616 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.617 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.618 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.619 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.620 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.621 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.622 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.623 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.624 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.625 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.626 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.627 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.628 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.629 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 WARNING oslo_config.cfg [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 24 15:36:38 compute-0 nova_compute[188703]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 24 15:36:38 compute-0 nova_compute[188703]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 24 15:36:38 compute-0 nova_compute[188703]: and ``live_migration_inbound_addr`` respectively.
Feb 24 15:36:38 compute-0 nova_compute[188703]: ).  Its value may be silently ignored in the future.
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
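[Editorial sketch, not log output: the deprecation warning above says to replace live_migration_uri with the two options it names, both of which appear in this dump (libvirt.live_migration_scheme, libvirt.live_migration_inbound_addr). A minimal nova.conf fragment under that assumption, keeping the qemu+tls transport this host logs; the inbound address is a hypothetical placeholder, not a value from this log:]

    [libvirt]
    # was: live_migration_uri = qemu+tls://%s/system   (deprecated for removal)
    live_migration_scheme = tls
    live_migration_inbound_addr = compute-0.internalapi.example   # hypothetical placeholder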
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.630 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.631 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.632 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.633 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.634 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.635 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.636 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.637 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.638 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.639 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.640 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.641 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.642 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.643 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.644 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.645 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.646 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.647 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.648 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.649 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.650 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.651 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.652 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.653 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.654 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.655 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.656 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.657 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.658 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.659 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.660 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.661 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.662 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.663 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.664 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.665 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.666 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.667 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.667 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.667 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.668 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.669 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.670 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.671 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.672 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.673 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.674 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.675 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.676 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.676 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.676 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.676 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.676 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.677 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.677 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.677 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.677 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.677 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.678 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.678 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.678 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.678 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.679 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.679 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.679 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.679 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.679 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.680 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.681 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.681 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.681 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.681 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.681 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.682 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.683 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.684 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.685 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.686 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.687 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.688 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.689 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.689 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.689 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.689 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.689 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.690 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.691 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.692 188707 DEBUG oslo_service.service [None req-f1c00ff6-5a0f-4976-9b5f-d65940d8c034 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.693 188707 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.714 188707 INFO nova.virt.node [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Determined node identity 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from /var/lib/nova/compute_id
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.714 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.715 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.715 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.715 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.727 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6484ea0670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.729 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6484ea0670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.738 188707 INFO nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Connection event '1' reason 'None'
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.745 188707 INFO nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Libvirt host capabilities <capabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]: 
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <host>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <uuid>dc915849-1080-4855-b939-c41e7d9bcc71</uuid>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <arch>x86_64</arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model>EPYC-Rome-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <vendor>AMD</vendor>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <microcode version='16777317'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <signature family='23' model='49' stepping='0'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='x2apic'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='tsc-deadline'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='osxsave'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='hypervisor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='tsc_adjust'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='spec-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='stibp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='arch-capabilities'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='cmp_legacy'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='topoext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='virt-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='lbrv'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='tsc-scale'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='vmcb-clean'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='pause-filter'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='pfthreshold'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='svme-addr-chk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='rdctl-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='skip-l1dfl-vmentry'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='mds-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature name='pschange-mc-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <pages unit='KiB' size='4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <pages unit='KiB' size='2048'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <pages unit='KiB' size='1048576'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <power_management>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <suspend_mem/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <suspend_disk/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <suspend_hybrid/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </power_management>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <iommu support='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <migration_features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <live/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <uri_transports>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <uri_transport>tcp</uri_transport>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <uri_transport>rdma</uri_transport>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </uri_transports>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </migration_features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <topology>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <cells num='1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <cell id='0'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <memory unit='KiB'>7864276</memory>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <pages unit='KiB' size='4'>1966069</pages>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <pages unit='KiB' size='2048'>0</pages>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <distances>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <sibling id='0' value='10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           </distances>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           <cpus num='8'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:           </cpus>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         </cell>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </cells>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </topology>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <cache>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </cache>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <secmodel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model>selinux</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <doi>0</doi>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </secmodel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <secmodel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model>dac</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <doi>0</doi>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </secmodel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </host>
Feb 24 15:36:38 compute-0 nova_compute[188703]: 
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <guest>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <os_type>hvm</os_type>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <arch name='i686'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <wordsize>32</wordsize>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <domain type='qemu'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <domain type='kvm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <pae/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <nonpae/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <acpi default='on' toggle='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <apic default='on' toggle='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <cpuselection/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <deviceboot/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <disksnapshot default='on' toggle='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <externalSnapshot/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </guest>
Feb 24 15:36:38 compute-0 nova_compute[188703]: 
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <guest>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <os_type>hvm</os_type>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <arch name='x86_64'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <wordsize>64</wordsize>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <domain type='qemu'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <domain type='kvm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <acpi default='on' toggle='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <apic default='on' toggle='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <cpuselection/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <deviceboot/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <disksnapshot default='on' toggle='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <externalSnapshot/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </guest>
Feb 24 15:36:38 compute-0 nova_compute[188703]: 
Feb 24 15:36:38 compute-0 nova_compute[188703]: </capabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]: 
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.753 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.757 188707 DEBUG nova.virt.libvirt.volume.mount [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.758 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 24 15:36:38 compute-0 nova_compute[188703]: <domainCapabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <domain>kvm</domain>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <arch>i686</arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <vcpu max='4096'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <iothreads supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <os supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <enum name='firmware'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <loader supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>rom</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pflash</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='readonly'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>yes</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='secure'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </loader>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </os>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='maximumMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <vendor>AMD</vendor>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='succor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='custom' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='athlon'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='athlon-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='core2duo'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='core2duo-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='coreduo'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='coreduo-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='n270'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='n270-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='phenom'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='phenom-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <memoryBacking supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <enum name='sourceType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>file</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>anonymous</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>memfd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </memoryBacking>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <disk supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='diskDevice'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>disk</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>cdrom</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>floppy</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>lun</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>fdc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>sata</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <graphics supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vnc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>egl-headless</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </graphics>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <video supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='modelType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vga</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>cirrus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>none</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>bochs</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>ramfb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </video>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <hostdev supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='mode'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>subsystem</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='startupPolicy'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>mandatory</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>requisite</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>optional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='subsysType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pci</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='capsType'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='pciBackend'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </hostdev>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <rng supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>random</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>egd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <filesystem supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='driverType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>path</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>handle</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtiofs</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </filesystem>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <tpm supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tpm-tis</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tpm-crb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>emulator</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>external</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendVersion'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>2.0</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </tpm>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <redirdev supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </redirdev>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <channel supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </channel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <crypto supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>qemu</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </crypto>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <interface supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>passt</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <panic supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>isa</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>hyperv</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </panic>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <console supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>null</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dev</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>file</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pipe</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>stdio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>udp</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tcp</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>qemu-vdagent</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </console>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <gic supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <genid supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <backup supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <async-teardown supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <s390-pv supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <ps2 supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <tdx supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <sev supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <sgx supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <hyperv supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='features'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>relaxed</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vapic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>spinlocks</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vpindex</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>runtime</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>synic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>stimer</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>reset</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vendor_id</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>frequencies</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>reenlightenment</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tlbflush</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>ipi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>avic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>emsr_bitmap</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>xmm_input</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <defaults>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </defaults>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </hyperv>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <launchSecurity supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </features>
Feb 24 15:36:38 compute-0 nova_compute[188703]: </domainCapabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.768 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 24 15:36:38 compute-0 nova_compute[188703]: <domainCapabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <domain>kvm</domain>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <arch>i686</arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <vcpu max='240'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <iothreads supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <os supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <enum name='firmware'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <loader supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>rom</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pflash</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='readonly'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>yes</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='secure'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </loader>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </os>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='maximumMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <vendor>AMD</vendor>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='succor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='custom' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='athlon'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='athlon-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='core2duo'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='core2duo-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='coreduo'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='coreduo-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='n270'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='n270-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='phenom'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='phenom-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <memoryBacking supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <enum name='sourceType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>file</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>anonymous</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>memfd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </memoryBacking>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <disk supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='diskDevice'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>disk</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>cdrom</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>floppy</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>lun</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>ide</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>fdc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>sata</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <graphics supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vnc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>egl-headless</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </graphics>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <video supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='modelType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vga</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>cirrus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>none</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>bochs</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>ramfb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </video>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <hostdev supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='mode'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>subsystem</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='startupPolicy'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>mandatory</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>requisite</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>optional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='subsysType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pci</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='capsType'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='pciBackend'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </hostdev>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <rng supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>random</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>egd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <filesystem supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='driverType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>path</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>handle</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>virtiofs</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </filesystem>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <tpm supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tpm-tis</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tpm-crb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>emulator</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>external</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendVersion'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>2.0</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </tpm>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <redirdev supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </redirdev>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <channel supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </channel>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <crypto supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>qemu</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </crypto>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <interface supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='backendType'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>passt</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <panic supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>isa</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>hyperv</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </panic>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <console supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>null</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vc</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dev</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>file</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pipe</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>stdio</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>udp</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tcp</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>qemu-vdagent</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </console>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <features>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <gic supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <genid supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <backup supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <async-teardown supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <s390-pv supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <ps2 supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <tdx supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <sev supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <sgx supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <hyperv supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='features'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>relaxed</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vapic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>spinlocks</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vpindex</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>runtime</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>synic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>stimer</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>reset</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>vendor_id</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>frequencies</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>reenlightenment</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>tlbflush</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>ipi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>avic</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>emsr_bitmap</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>xmm_input</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <defaults>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </defaults>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </hyperv>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <launchSecurity supported='no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </features>
Feb 24 15:36:38 compute-0 nova_compute[188703]: </domainCapabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.877 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 24 15:36:38 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.883 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 24 15:36:38 compute-0 nova_compute[188703]: <domainCapabilities>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <domain>kvm</domain>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <arch>x86_64</arch>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <vcpu max='4096'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <iothreads supported='yes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <os supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <enum name='firmware'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>efi</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <loader supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>rom</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>pflash</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='readonly'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>yes</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='secure'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>yes</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </loader>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   </os>
Feb 24 15:36:38 compute-0 nova_compute[188703]:   <cpu>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <enum name='maximumMigratable'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <vendor>AMD</vendor>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='succor'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:38 compute-0 nova_compute[188703]:     <mode name='custom' supported='yes'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Denverton-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='EPYC-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Haswell-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:38 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client'>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:38 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='athlon'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='athlon-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='core2duo'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='core2duo-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='coreduo'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='coreduo-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='n270'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='n270-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='phenom'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='phenom-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <memoryBacking supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <enum name='sourceType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>file</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>anonymous</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>memfd</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </memoryBacking>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <disk supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='diskDevice'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>disk</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>cdrom</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>floppy</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>lun</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>fdc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>sata</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <graphics supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vnc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>egl-headless</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </graphics>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <video supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='modelType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vga</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>cirrus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>none</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>bochs</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>ramfb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </video>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <hostdev supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='mode'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>subsystem</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='startupPolicy'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>mandatory</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>requisite</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>optional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='subsysType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pci</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='capsType'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='pciBackend'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </hostdev>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <rng supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>random</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>egd</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <filesystem supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='driverType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>path</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>handle</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtiofs</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </filesystem>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <tpm supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tpm-tis</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tpm-crb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>emulator</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>external</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendVersion'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>2.0</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </tpm>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <redirdev supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </redirdev>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <channel supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </channel>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <crypto supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>qemu</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </crypto>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <interface supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>passt</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <panic supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>isa</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>hyperv</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </panic>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <console supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>null</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dev</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>file</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pipe</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>stdio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>udp</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tcp</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>qemu-vdagent</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </console>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <features>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <gic supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <genid supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <backup supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <async-teardown supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <s390-pv supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <ps2 supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <tdx supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <sev supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <sgx supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <hyperv supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='features'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>relaxed</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vapic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>spinlocks</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vpindex</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>runtime</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>synic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>stimer</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>reset</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vendor_id</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>frequencies</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>reenlightenment</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tlbflush</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>ipi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>avic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>emsr_bitmap</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>xmm_input</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <defaults>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </defaults>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </hyperv>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <launchSecurity supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </features>
Feb 24 15:36:39 compute-0 nova_compute[188703]: </domainCapabilities>
Feb 24 15:36:39 compute-0 nova_compute[188703]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:38.959 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 24 15:36:39 compute-0 nova_compute[188703]: <domainCapabilities>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <path>/usr/libexec/qemu-kvm</path>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <domain>kvm</domain>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <arch>x86_64</arch>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <vcpu max='240'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <iothreads supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <os supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <enum name='firmware'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <loader supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>rom</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pflash</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='readonly'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>yes</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='secure'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>no</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </loader>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </os>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <cpu>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <mode name='host-passthrough' supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='hostPassthroughMigratable'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <mode name='maximum' supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='maximumMigratable'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>on</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>off</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <mode name='host-model' supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <vendor>AMD</vendor>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='x2apic'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-deadline'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='hypervisor'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc_adjust'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='spec-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='stibp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='ssbd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='cmp_legacy'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='overflow-recov'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='succor'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='amd-ssbd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='virt-ssbd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='lbrv'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='tsc-scale'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='vmcb-clean'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='flushbyasid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='pause-filter'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='pfthreshold'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='svme-addr-chk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <feature policy='disable' name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <mode name='custom' supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Broadwell-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cascadelake-Server-v5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='ClearwaterForest-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ddpd-u'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sha512'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sm3'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sm4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cooperlake'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Cooperlake-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Denverton'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Denverton-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Denverton-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Denverton-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Dhyana-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Genoa-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Milan-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Rome-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-Turin-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amd-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='auto-ibrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vp2intersect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fs-gs-base-ns'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibpb-brtype'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='no-nested-data-bp'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='null-sel-clr-base'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='perfmon-v2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbpb'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='srso-user-kernel-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='stibp-always-on'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='EPYC-v5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='GraniteRapids-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-128'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-256'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx10-512'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='prefetchiti'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Haswell-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-noTSX'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v6'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Icelake-Server-v7'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='IvyBridge'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='IvyBridge-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='KnightsMill'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='KnightsMill-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-4fmaps'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-4vnniw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512er'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512pf'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Opteron_G4-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Opteron_G5-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fma4'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tbm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xop'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SapphireRapids-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='amx-tile'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-bf16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-fp16'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512-vpopcntdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bitalg'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vbmi2'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrc'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fzrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='la57'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='taa-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='tsx-ldtrk'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SierraForest'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='SierraForest-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ifma'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-ne-convert'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx-vnni-int8'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bhi-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='bus-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cmpccxadd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fbsdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='fsrs'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ibrs-all'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='intel-psfd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ipred-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='lam'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mcdt-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pbrsb-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='psdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rrsba-ctrl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='sbdr-ssdp-no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='serialize'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vaes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='vpclmulqdq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Client-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='hle'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='rtm'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Skylake-Server-v5'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512bw'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512cd'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512dq'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512f'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='avx512vl'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='invpcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pcid'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='pku'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='mpx'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v2'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v3'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='core-capability'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='split-lock-detect'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='Snowridge-v4'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='cldemote'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='erms'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='gfni'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdir64b'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='movdiri'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='xsaves'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='athlon'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='athlon-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='core2duo'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='core2duo-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='coreduo'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='coreduo-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='n270'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='n270-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='ss'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='phenom'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <blockers model='phenom-v1'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnow'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <feature name='3dnowext'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </blockers>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </mode>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <memoryBacking supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <enum name='sourceType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>file</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>anonymous</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <value>memfd</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </memoryBacking>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <disk supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='diskDevice'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>disk</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>cdrom</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>floppy</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>lun</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>ide</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>fdc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>sata</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <graphics supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vnc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>egl-headless</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </graphics>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <video supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='modelType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vga</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>cirrus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>none</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>bochs</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>ramfb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </video>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <hostdev supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='mode'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>subsystem</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='startupPolicy'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>mandatory</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>requisite</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>optional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='subsysType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pci</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>scsi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='capsType'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='pciBackend'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </hostdev>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <rng supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtio-non-transitional</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>random</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>egd</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <filesystem supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='driverType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>path</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>handle</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>virtiofs</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </filesystem>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <tpm supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tpm-tis</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tpm-crb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>emulator</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>external</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendVersion'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>2.0</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </tpm>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <redirdev supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='bus'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>usb</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </redirdev>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <channel supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </channel>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <crypto supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>qemu</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendModel'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>builtin</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </crypto>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <interface supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='backendType'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>default</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>passt</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <panic supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='model'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>isa</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>hyperv</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </panic>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <console supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='type'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>null</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vc</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pty</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dev</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>file</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>pipe</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>stdio</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>udp</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tcp</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>unix</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>qemu-vdagent</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>dbus</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </console>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   <features>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <gic supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <vmcoreinfo supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <genid supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <backingStoreInput supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <backup supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <async-teardown supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <s390-pv supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <ps2 supported='yes'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <tdx supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <sev supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <sgx supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <hyperv supported='yes'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <enum name='features'>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>relaxed</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vapic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>spinlocks</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vpindex</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>runtime</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>synic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>stimer</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>reset</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>vendor_id</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>frequencies</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>reenlightenment</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>tlbflush</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>ipi</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>avic</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>emsr_bitmap</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <value>xmm_input</value>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </enum>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       <defaults>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <spinlocks>4095</spinlocks>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <stimer_direct>on</stimer_direct>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <tlbflush_direct>on</tlbflush_direct>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <tlbflush_extended>on</tlbflush_extended>
Feb 24 15:36:39 compute-0 nova_compute[188703]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 24 15:36:39 compute-0 nova_compute[188703]:       </defaults>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     </hyperv>
Feb 24 15:36:39 compute-0 nova_compute[188703]:     <launchSecurity supported='no'/>
Feb 24 15:36:39 compute-0 nova_compute[188703]:   </features>
Feb 24 15:36:39 compute-0 nova_compute[188703]: </domainCapabilities>
Feb 24 15:36:39 compute-0 nova_compute[188703]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
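
[annotation] The XML dump that ends above is libvirt's domainCapabilities document, which nova's _get_domain_capabilities retrieves once at startup to learn which CPU models, devices, and features the hypervisor can offer. A minimal sketch of fetching the same document outside of nova, assuming libvirt-python is installed and qemu:///system is reachable (this is not nova's exact code, just the underlying API call):

    # Fetch the domainCapabilities XML logged above and list CPU models
    # that are usable and not deprecated.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    try:
        # Mirror the host seen in this log; None lets libvirt pick
        # defaults for the emulator binary and machine type.
        caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    finally:
        conn.close()

    root = ET.fromstring(caps_xml)
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes' and model.get('deprecated') != 'yes':
            print(model.text)

Models marked usable='no' carry a sibling <blockers> element, as in the dump above, naming the host-missing features (e.g. 'ss', '3dnow') that block them.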
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.043 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.044 188707 INFO nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Secure Boot support detected
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.047 188707 INFO nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.047 188707 INFO nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.057 188707 DEBUG nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.077 188707 INFO nova.virt.node [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Determined node identity 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from /var/lib/nova/compute_id
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.093 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Verified node 3c29c547-d990-4bd5-9bfd-810bbeade4e4 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.122 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.254 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.255 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.255 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.256 188707 DEBUG nova.compute.resource_tracker [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.444 188707 WARNING nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.445 188707 DEBUG nova.compute.resource_tracker [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6018MB free_disk=72.48983001708984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.445 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:36:39 compute-0 nova_compute[188703]: 2026-02-24 15:36:39.445 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.138 188707 DEBUG nova.compute.resource_tracker [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.139 188707 DEBUG nova.compute.resource_tracker [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.411 188707 DEBUG nova.scheduler.client.report [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.451 188707 DEBUG nova.scheduler.client.report [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.451 188707 DEBUG nova.compute.provider_tree [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.485 188707 DEBUG nova.scheduler.client.report [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.529 188707 DEBUG nova.scheduler.client.report [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.583 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 24 15:36:40 compute-0 nova_compute[188703]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.583 188707 INFO nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] kernel doesn't support AMD SEV
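
[annotation] The two lines above show nova's kernel-level AMD SEV probe: it reads the kvm_amd module parameter, finds "N", and concludes SEV is unavailable. A hedged sketch of the same check (the sysfs path is the one in the log; the exact accepted values are an assumption, since some kernels report '1'/'0' rather than 'Y'/'N'):

    from pathlib import Path

    def kernel_supports_amd_sev(param=Path('/sys/module/kvm_amd/parameters/sev')):
        try:
            # 'Y' or '1' means the module enabled SEV; 'N' (as logged
            # above) means it did not.
            return param.read_text().strip() in ('1', 'Y', 'y')
        except FileNotFoundError:
            # kvm_amd not loaded at all (e.g. an Intel host).
            return False

    print(kernel_supports_amd_sev())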
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.584 188707 DEBUG nova.compute.provider_tree [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.584 188707 DEBUG nova.virt.libvirt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.600 188707 DEBUG nova.scheduler.client.report [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
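
[annotation] The inventory dict repeated in the lines above is what the resource tracker reports to placement. Schedulable capacity per resource class is derived as (total - reserved) * allocation_ratio; a worked check against the exact values logged for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4:

    # Capacity formula applied to the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1

So this 8-vCPU, 7679 MB host can accept up to 32 vCPUs of instances (4x overcommit), 7167 MB of RAM after the 512 MB reservation, and about 71 GB of disk (0.9 undercommit).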
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.618 188707 DEBUG nova.compute.resource_tracker [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.618 188707 DEBUG oslo_concurrency.lockutils [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.618 188707 DEBUG nova.service [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.645 188707 DEBUG nova.service [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 24 15:36:40 compute-0 nova_compute[188703]: 2026-02-24 15:36:40.646 188707 DEBUG nova.servicegroup.drivers.db [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 24 15:36:43 compute-0 sshd-session[189026]: Accepted publickey for zuul from 192.168.122.30 port 56628 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:36:43 compute-0 systemd-logind[813]: New session 25 of user zuul.
Feb 24 15:36:43 compute-0 systemd[1]: Started Session 25 of User zuul.
Feb 24 15:36:43 compute-0 sshd-session[189026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:36:43 compute-0 rsyslogd[1018]: imjournal from <np0005628225:systemd-logind>: begin to drop messages due to rate-limiting
Feb 24 15:36:44 compute-0 python3.9[189179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:36:45 compute-0 sudo[189333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvbbmrxoivsftqtlirezyqjirueemha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947405.1522675-31-139400876102314/AnsiballZ_systemd_service.py'
Feb 24 15:36:45 compute-0 sudo[189333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:46 compute-0 python3.9[189336]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:36:46 compute-0 systemd[1]: Reloading.
Feb 24 15:36:46 compute-0 systemd-rc-local-generator[189360]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:36:46 compute-0 systemd-sysv-generator[189365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:36:46 compute-0 sudo[189333]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:47 compute-0 python3.9[189528]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:36:47 compute-0 network[189545]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 24 15:36:47 compute-0 network[189546]: 'network-scripts' will be removed from distribution in near future.
Feb 24 15:36:47 compute-0 network[189547]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 24 15:36:50 compute-0 sudo[189818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihruuyskjmgelrvduecutouoisnxuaeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947410.2844074-50-98561131331928/AnsiballZ_systemd_service.py'
Feb 24 15:36:50 compute-0 sudo[189818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:50 compute-0 python3.9[189821]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:36:51 compute-0 sudo[189818]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:51 compute-0 sudo[189972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmtyilnpkszozhfnkcotfotkmlmfarc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947411.337788-60-166504530853786/AnsiballZ_file.py'
Feb 24 15:36:51 compute-0 sudo[189972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:52 compute-0 python3.9[189975]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:52 compute-0 sudo[189972]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:52 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:36:52 compute-0 rsyslogd[1018]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:36:52 compute-0 sudo[190126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajusrbvmghybjaccubzsudccvildtkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947412.2639277-68-226823949271838/AnsiballZ_file.py'
Feb 24 15:36:52 compute-0 sudo[190126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:52 compute-0 python3.9[190129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:36:52 compute-0 sudo[190126]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:53 compute-0 sudo[190279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plndqqthaoqbgohavkgwdbkdzaacracb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947413.0205426-77-17119603635061/AnsiballZ_command.py'
Feb 24 15:36:53 compute-0 sudo[190279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:53 compute-0 python3.9[190282]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:36:53 compute-0 sudo[190279]: pam_unix(sudo:session): session closed for user root
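
[annotation] The shell fragment invoked above is an idempotent "retire this unit" pattern: disable certmonger only if it is currently active, then mask it unless a local override unit file already exists. A sketch of the same logic in Python for clarity (unit name and path are taken from the log; the helper itself is illustrative):

    import subprocess
    from pathlib import Path

    def disable_and_mask(unit='certmonger.service'):
        # 'systemctl is-active' exits 0 only when the unit is running.
        if subprocess.run(['systemctl', 'is-active', unit]).returncode == 0:
            subprocess.run(['systemctl', 'disable', '--now', unit], check=True)
            # Mask only when no local unit file shadows it, matching the
            # `test -f ... || systemctl mask ...` guard in the log.
            if not Path(f'/etc/systemd/system/{unit}').is_file():
                subprocess.run(['systemctl', 'mask', unit], check=True)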
Feb 24 15:36:54 compute-0 python3.9[190434]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:36:55 compute-0 sudo[190584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkvcitrtbutqzazjsthqsfsbrgmgnpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947414.9770803-95-90104352492438/AnsiballZ_systemd_service.py'
Feb 24 15:36:55 compute-0 sudo[190584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:55 compute-0 python3.9[190587]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:36:55 compute-0 systemd[1]: Reloading.
Feb 24 15:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:36:55.689 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:36:55.691 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:36:55.691 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:36:55 compute-0 systemd-rc-local-generator[190613]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:36:55 compute-0 systemd-sysv-generator[190619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:36:56 compute-0 sudo[190584]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:56 compute-0 sudo[190779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swibbgodrvqllpvafiwcnrgdvzdzqvzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947416.56886-103-142229772928168/AnsiballZ_command.py'
Feb 24 15:36:56 compute-0 sudo[190779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:57 compute-0 python3.9[190782]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:36:57 compute-0 sudo[190779]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:57 compute-0 sudo[190933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znhmvafjtltdwtbgjtflonxhlajexvoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947417.3666682-112-89402269660558/AnsiballZ_file.py'
Feb 24 15:36:57 compute-0 sudo[190933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:57 compute-0 python3.9[190936]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:36:57 compute-0 sudo[190933]: pam_unix(sudo:session): session closed for user root
Feb 24 15:36:58 compute-0 podman[191060]: 2026-02-24 15:36:58.544330896 +0000 UTC m=+0.110132097 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:36:58 compute-0 python3.9[191094]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:36:59 compute-0 sudo[191258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrbreaixefaegzwdtrdtyydtbwdcbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947418.8551822-128-82281179829708/AnsiballZ_group.py'
Feb 24 15:36:59 compute-0 sudo[191258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:36:59 compute-0 python3.9[191261]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Feb 24 15:36:59 compute-0 sudo[191258]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:00 compute-0 sudo[191411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlcimisaanqyyqlimoeggffdpwdwxlhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947419.908364-139-277943494012870/AnsiballZ_getent.py'
Feb 24 15:37:00 compute-0 sudo[191411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:00 compute-0 python3.9[191414]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 24 15:37:00 compute-0 sudo[191411]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:01 compute-0 sudo[191565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsspokzoyfusiyjknboqzzzpblxqhyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947420.7365558-147-155612641308401/AnsiballZ_group.py'
Feb 24 15:37:01 compute-0 sudo[191565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:01 compute-0 python3.9[191568]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 24 15:37:01 compute-0 groupadd[191569]: group added to /etc/group: name=ceilometer, GID=42405
Feb 24 15:37:01 compute-0 groupadd[191569]: group added to /etc/gshadow: name=ceilometer
Feb 24 15:37:01 compute-0 groupadd[191569]: new group: name=ceilometer, GID=42405
Feb 24 15:37:01 compute-0 sudo[191565]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:02 compute-0 sudo[191724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfaocjhvnbwfroiikwkolnhbilgnaynx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947421.5220416-155-128483246773841/AnsiballZ_user.py'
Feb 24 15:37:02 compute-0 sudo[191724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:02 compute-0 python3.9[191727]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 24 15:37:02 compute-0 useradd[191729]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/1
Feb 24 15:37:02 compute-0 useradd[191729]: add 'ceilometer' to group 'libvirt'
Feb 24 15:37:02 compute-0 useradd[191729]: add 'ceilometer' to shadow group 'libvirt'
Feb 24 15:37:02 compute-0 sudo[191724]: pam_unix(sudo:session): session closed for user root
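
[annotation] The groupadd/useradd entries above create the ceilometer service account with a pinned UID/GID (42405), a nologin shell, and supplementary membership in libvirt (needed to query the hypervisor for polling). A stdlib-only sketch that verifies the resulting account matches what the play requested:

    import grp
    import pwd

    user = pwd.getpwnam('ceilometer')
    assert user.pw_uid == 42405 and user.pw_gid == 42405
    assert user.pw_shell == '/sbin/nologin'
    assert grp.getgrgid(42405).gr_name == 'ceilometer'      # primary group
    assert 'ceilometer' in grp.getgrnam('libvirt').gr_mem   # supplementary
    print('ceilometer account matches the play')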
Feb 24 15:37:03 compute-0 python3.9[191885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:04 compute-0 python3.9[192006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947423.0605233-181-219197838848418/.source.conf _original_basename=ceilometer.conf follow=False checksum=5c6a9288d15d1b05b1484826ce363ad306e9930c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:04 compute-0 python3.9[192156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:05 compute-0 python3.9[192277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947424.5024073-181-97654091749362/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:06 compute-0 python3.9[192427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:06 compute-0 python3.9[192548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947425.731954-181-275624076013101/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
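
[annotation] Each ansible.legacy.stat / ansible.legacy.copy pair above is the copy action's idempotence check: the controller first checksums the destination and only transfers the file when it differs from the source (the sha1 checksums are visible in the Invoked parameters). A reduced sketch of that pattern, with illustrative file names:

    import hashlib
    import shutil
    from pathlib import Path
    from typing import Optional

    def sha1(path: Path) -> Optional[str]:
        if not path.is_file():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path) -> bool:
        if sha1(src) == sha1(dest):
            return False          # checksums match -> task reports 'ok'
        shutil.copy2(src, dest)   # differs or missing -> 'changed'
        return True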
Feb 24 15:37:07 compute-0 podman[192672]: 2026-02-24 15:37:07.468435323 +0000 UTC m=+0.112647667 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 24 15:37:07 compute-0 python3.9[192709]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:08 compute-0 python3.9[192878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:09 compute-0 python3.9[193030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:09 compute-0 python3.9[193151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947428.5509915-240-115662565993174/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:10 compute-0 python3.9[193301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:11 compute-0 python3.9[193422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947429.9207635-240-110237296434027/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:11 compute-0 python3.9[193572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:12 compute-0 python3.9[193693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947431.3696485-269-239482984720497/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:13 compute-0 python3.9[193843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:13 compute-0 python3.9[193964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947432.8079906-285-81601781903322/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:14 compute-0 python3.9[194114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:15 compute-0 python3.9[194235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947434.0732944-300-179451922188330/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:16 compute-0 python3.9[194385]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:16 compute-0 python3.9[194506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947435.5625682-315-48075881017387/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
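The telemetry config drops above alternate between mode=0644 (an octal string) and mode=420 (a bare integer). Both set the same rw-r--r-- bits: Ansible parses an unquoted integer mode as decimal, and 420 decimal is 0o644. A quick check in Python:

```python
# 420 (decimal) and 0o644 (octal) are the same permission bits: rw-r--r--.
# Ansible treats an unquoted integer mode as decimal, hence the two spellings.
assert 420 == 0o644
print(f"{420:o}")  # -> 644
```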
Feb 24 15:37:17 compute-0 sudo[194656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivroxapqrchjpnuybbyezhrwaxlvkujx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947436.8247435-330-183543724932653/AnsiballZ_file.py'
Feb 24 15:37:17 compute-0 sudo[194656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:17 compute-0 python3.9[194659]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:17 compute-0 sudo[194656]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:17 compute-0 sudo[194809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siqbibpqexvoqqneiwelsvsvjtnfwjss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947437.5458097-338-120122021611210/AnsiballZ_file.py'
Feb 24 15:37:17 compute-0 sudo[194809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:18 compute-0 python3.9[194812]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:18 compute-0 sudo[194809]: pam_unix(sudo:session): session closed for user root
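The two ansible.builtin.file tasks above pin ownership and mode on the telemetry TLS material before the containers mount it. A rough equivalent of their effect (paths taken from the log; run as root; a sketch, not the module's implementation):

```python
import grp
import os
import pwd

# Resolve the ceilometer user/group as ansible.builtin.file would.
uid = pwd.getpwnam("ceilometer").pw_uid
gid = grp.getgrnam("ceilometer").gr_gid

for path in ("/var/lib/openstack/certs/telemetry/default/tls.crt",
             "/var/lib/openstack/certs/telemetry/default/tls.key"):
    os.chown(path, uid, gid)   # owner=ceilometer group=ceilometer
    os.chmod(path, 0o644)      # mode=0644, as invoked above
```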
Feb 24 15:37:18 compute-0 python3.9[194962]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:19 compute-0 python3.9[195114]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:19 compute-0 python3.9[195266]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:20 compute-0 sudo[195418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aksctihqujapvfzdtcbcvzepthetgqta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947440.2392256-370-134563258168594/AnsiballZ_file.py'
Feb 24 15:37:20 compute-0 sudo[195418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:20 compute-0 python3.9[195421]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:20 compute-0 sudo[195418]: pam_unix(sudo:session): session closed for user root
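The directory task above creates /var/lib/openstack/healthchecks with setype=container_file_t so the healthcheck scripts can be bind-mounted read-only into containers later in this log. An approximation of its effect, using chcon as a stand-in for the module's SELinux bindings:

```python
import os
import subprocess

d = "/var/lib/openstack/healthchecks"
os.makedirs(d, mode=0o755, exist_ok=True)               # state=directory, mode=0755
subprocess.run(["chown", "zuul:zuul", d], check=True)   # owner/group=zuul
# setype=container_file_t lets container processes read the mounted scripts.
subprocess.run(["chcon", "-t", "container_file_t", d], check=True)
```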
Feb 24 15:37:21 compute-0 sudo[195571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbjuoawcajmfrrchbxqwhwvxhgpyrut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947440.9588232-378-145620723963529/AnsiballZ_systemd_service.py'
Feb 24 15:37:21 compute-0 sudo[195571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:21 compute-0 python3.9[195574]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:37:21 compute-0 systemd[1]: Reloading.
Feb 24 15:37:21 compute-0 systemd-rc-local-generator[195602]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:37:21 compute-0 systemd-sysv-generator[195609]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:37:21 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 24 15:37:22 compute-0 sudo[195571]: pam_unix(sudo:session): session closed for user root
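The systemd_service task above amounts to `systemctl enable --now podman.socket`; once the socket unit is listening ("Listening on Podman API Socket."), the Podman REST API answers on /run/podman/podman.sock. A minimal probe, with the endpoint version below being illustrative:

```python
import subprocess

subprocess.run(["systemctl", "enable", "--now", "podman.socket"], check=True)
# Query the libpod info endpoint over the unix socket ("d" is a dummy host).
subprocess.run(["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
                "http://d/v4.0.0/libpod/info"], check=True)
```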
Feb 24 15:37:22 compute-0 sudo[195771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooytajlkcgkxetmtawzyugwpymphmrbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/AnsiballZ_stat.py'
Feb 24 15:37:22 compute-0 sudo[195771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:22 compute-0 python3.9[195774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:22 compute-0 sudo[195771]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:23 compute-0 sudo[195895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocksixrrcjktmwbkgmhmmhhinjjeydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/AnsiballZ_copy.py'
Feb 24 15:37:23 compute-0 sudo[195895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:23 compute-0 python3.9[195898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:23 compute-0 sudo[195895]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:23 compute-0 sudo[195972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrbghrqkhnaqyurfsrikphmccrifqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/AnsiballZ_stat.py'
Feb 24 15:37:23 compute-0 sudo[195972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:24 compute-0 python3.9[195975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:24 compute-0 sudo[195972]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:24 compute-0 sudo[196096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-altbyazqbmjfhtgygerjoroqxvbkhzpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/AnsiballZ_copy.py'
Feb 24 15:37:24 compute-0 sudo[196096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:24 compute-0 python3.9[196099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947442.422136-387-106587186832265/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:24 compute-0 sudo[196096]: pam_unix(sudo:session): session closed for user root
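Each copy above is preceded by a stat of the destination with checksum_algorithm=sha1; the module rewrites the file only when the SHA-1 of the rendered source differs, which is what makes repeated runs idempotent. The shape of that check, as a sketch rather than Ansible's actual code:

```python
import hashlib
import os
import shutil

def sha1_of(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_if_changed(src: str, dest: str, mode: int = 0o700) -> bool:
    """Copy src to dest only on checksum mismatch; returns Ansible-style 'changed'."""
    if not os.path.exists(dest) or sha1_of(src) != sha1_of(dest):
        shutil.copy2(src, dest)
        os.chmod(dest, mode)   # the healthcheck scripts above use mode=0700
        return True
    return False
```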
Feb 24 15:37:25 compute-0 sudo[196249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phibkemqfifuffiztwrwucbljvtfxjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947445.3697462-419-260244492644355/AnsiballZ_file.py'
Feb 24 15:37:25 compute-0 sudo[196249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:25 compute-0 python3.9[196252]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:25 compute-0 sudo[196249]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:26 compute-0 sudo[196402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czactzragkhdfgwoaiqrhmttxrtzikhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947446.1818838-427-67397041130910/AnsiballZ_file.py'
Feb 24 15:37:26 compute-0 sudo[196402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:26 compute-0 python3.9[196405]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:26 compute-0 sudo[196402]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:27 compute-0 sudo[196555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsykxpgxnpdyzfwcesirzgxtvcyduxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947446.852086-435-181783769195019/AnsiballZ_stat.py'
Feb 24 15:37:27 compute-0 sudo[196555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:27 compute-0 python3.9[196558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:27 compute-0 sudo[196555]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:27 compute-0 sudo[196679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wagjirynjdggcallcedujuyrslhuvdwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947446.852086-435-181783769195019/AnsiballZ_copy.py'
Feb 24 15:37:27 compute-0 sudo[196679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:27 compute-0 python3.9[196682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947446.852086-435-181783769195019/.source.json _original_basename=.4fks1vbf follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:27 compute-0 sudo[196679]: pam_unix(sudo:session): session closed for user root
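The JSON dropped above drives kolla_start inside the container. Its plausible shape can be reconstructed from the kolla_set_configs and run_command output later in this log; the exact field list below is an assumption, not a dump of the real file:

```python
import json

# Hypothetical reconstruction of
# /var/lib/kolla/config_files/ceilometer_agent_compute.json.
config = {
    "command": ("/usr/bin/ceilometer-polling --polling-namespaces compute "
                "--logfile /dev/stdout"),
    "config_files": [
        {"source": "/var/lib/kolla/config_files/src/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf",
         "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/kolla/config_files/src/polling.yaml",
         "dest": "/etc/ceilometer/polling.yaml",
         "owner": "ceilometer", "perm": "0600"},
    ],
}
print(json.dumps(config, indent=2))
```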
Feb 24 15:37:28 compute-0 python3.9[196832]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:28 compute-0 podman[196833]: 2026-02-24 15:37:28.741205382 +0000 UTC m=+0.071143405 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:37:30 compute-0 sudo[197272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuyftdvwjuravaciegeosbmkhhxjpek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947450.4190648-475-194559395439355/AnsiballZ_container_config_data.py'
Feb 24 15:37:30 compute-0 sudo[197272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:31 compute-0 python3.9[197275]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Feb 24 15:37:31 compute-0 sudo[197272]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:31 compute-0 sudo[197425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwchddjbfqbdtzigocirparzusximuhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947451.4537637-486-241691395091114/AnsiballZ_container_config_hash.py'
Feb 24 15:37:31 compute-0 sudo[197425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:32 compute-0 python3.9[197428]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:37:32 compute-0 sudo[197425]: pam_unix(sudo:session): session closed for user root
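container_config_hash recomputes per-service config checksums under /var/lib/openstack (the config_vol_prefix above); the EDPM_CONFIG_HASH label seen on the containers looks like several 64-hex SHA-256 digests joined with '-', and a changed hash triggers recreation of the container. How such a combined digest could be derived, offered as an assumption rather than the module source:

```python
import hashlib
import pathlib

def dir_digest(root: str) -> str:
    """SHA-256 over the sorted file contents beneath root."""
    h = hashlib.sha256()
    for p in sorted(pathlib.Path(root).rglob("*")):
        if p.is_file():
            h.update(p.read_bytes())
    return h.hexdigest()

# Illustrative config volumes; the real module walks the config_vol_prefix.
roots = ["/var/lib/openstack/telemetry"]
print("-".join(dir_digest(r) for r in roots))
```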
Feb 24 15:37:32 compute-0 sudo[197578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcvfuavbfqjhqkrcorgcutxbjxcwdfq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947452.4861703-496-213717420026026/AnsiballZ_edpm_container_manage.py'
Feb 24 15:37:32 compute-0 sudo[197578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:33 compute-0 python3[197581]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:37:33 compute-0 podman[197617]: 2026-02-24 15:37:33.406795166 +0000 UTC m=+0.069867709 container create 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 24 15:37:33 compute-0 podman[197617]: 2026-02-24 15:37:33.369526782 +0000 UTC m=+0.032599385 image pull 2de33f14cfa6ceeafb6b935f1a9771276123e54a116dc534ba3482038b9ef693 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Feb 24 15:37:33 compute-0 python3[197581]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Feb 24 15:37:33 compute-0 sudo[197578]: pam_unix(sudo:session): session closed for user root
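The PODMAN-CONTAINER-DEBUG line shows edpm_container_manage flattening config_data into a podman create invocation. A sketch of that mapping for the keys visible in this log (the real module also emits --conmon-pidfile, --label, --healthcheck-command, --log-driver and more, as the create line shows):

```python
def to_podman_args(name: str, cd: dict) -> list[str]:
    """Translate a config_data dict into podman create arguments (sketch)."""
    args = ["podman", "create", "--name", name]
    for key, val in cd.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "net" in cd:
        args += ["--network", cd["net"]]          # 'net': 'host' above
    if "security_opt" in cd:
        args += ["--security-opt", cd["security_opt"]]
    if "user" in cd:
        args += ["--user", cd["user"]]            # 'user': 'ceilometer'
    for vol in cd.get("volumes", []):
        args += ["--volume", vol]
    args.append(cd["image"])
    if "command" in cd:
        args.append(cd["command"])                # kolla_start
    return args
```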
Feb 24 15:37:34 compute-0 sudo[197804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-papeywrntxvynlgwnjucfbaraddxdrne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947453.7673452-504-220707064607241/AnsiballZ_stat.py'
Feb 24 15:37:34 compute-0 sudo[197804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:34 compute-0 python3.9[197807]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:34 compute-0 sudo[197804]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:34 compute-0 sudo[197959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxyfduqwloexawcwrkpigadlxyzuhiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947454.6341927-513-100506622402383/AnsiballZ_file.py'
Feb 24 15:37:34 compute-0 sudo[197959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:35 compute-0 python3.9[197962]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:35 compute-0 sudo[197959]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:35 compute-0 sudo[198036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfnuodztefplwsgbkfnbzosvitjmfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947454.6341927-513-100506622402383/AnsiballZ_stat.py'
Feb 24 15:37:35 compute-0 sudo[198036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:35 compute-0 python3.9[198039]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:35 compute-0 sudo[198036]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:36 compute-0 sudo[198188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxwxwwpdctqhyptqvhnfpkysyyvlbvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947455.654226-513-130460430313482/AnsiballZ_copy.py'
Feb 24 15:37:36 compute-0 sudo[198188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:36 compute-0 python3.9[198191]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947455.654226-513-130460430313482/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:36 compute-0 sudo[198188]: pam_unix(sudo:session): session closed for user root
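The unit copied above is what systemd reloads and restarts a few lines below. Its real content is templated by the edpm role; everything in this sketch is an assumption that merely matches the behaviour visible here (container started via podman, restart handled at the container level, a separate healthcheck timer alongside):

```python
# Hypothetical /etc/systemd/system/edpm_ceilometer_agent_compute.service body.
UNIT = """\
[Unit]
Description=ceilometer_agent_compute container
After=network-online.target

[Service]
Restart=always
ExecStart=/usr/bin/podman start ceilometer_agent_compute
ExecStop=/usr/bin/podman stop -t 10 ceilometer_agent_compute

[Install]
WantedBy=multi-user.target
"""
with open("/etc/systemd/system/edpm_ceilometer_agent_compute.service", "w") as f:
    f.write(UNIT)
```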
Feb 24 15:37:36 compute-0 sudo[198265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudyhckjtdewyrwweqawsgfnghhribov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947455.654226-513-130460430313482/AnsiballZ_systemd.py'
Feb 24 15:37:36 compute-0 sudo[198265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:37 compute-0 python3.9[198268]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:37:37 compute-0 systemd[1]: Reloading.
Feb 24 15:37:37 compute-0 systemd-sysv-generator[198299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:37:37 compute-0 systemd-rc-local-generator[198294]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:37:37 compute-0 sudo[198265]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:37 compute-0 podman[198312]: 2026-02-24 15:37:37.641114176 +0000 UTC m=+0.111003360 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.648 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.675 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 sudo[198411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcbksqauyeqlxocyzvmoadqccozlkgvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947455.654226-513-130460430313482/AnsiballZ_systemd.py'
Feb 24 15:37:37 compute-0 sudo[198411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.946 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.946 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.947 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.966 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.966 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.966 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.968 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.968 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.969 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.969 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.970 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:37:37 compute-0 nova_compute[188703]: 2026-02-24 15:37:37.971 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.003 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.004 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.004 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.004 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:37:38 compute-0 python3.9[198414]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.170 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.172 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6019MB free_disk=72.48961639404297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.172 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.172 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:37:38 compute-0 systemd[1]: Reloading.
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.249 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.250 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:37:38 compute-0 systemd-rc-local-generator[198435]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:37:38 compute-0 systemd-sysv-generator[198440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.286 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.305 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.308 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:37:38 compute-0 nova_compute[188703]: 2026-02-24 15:37:38.308 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
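The inventory dump above fixes the numbers the scheduler works with: placement treats capacity per resource class as (total - reserved) * allocation_ratio. Worked through for this host:

```python
# Values copied from the 'Inventory has not changed' line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1
```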
Feb 24 15:37:38 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Feb 24 15:37:38 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d76675029afcf9f6f278e333ab9728217548e4452d86a48515d81e5b681c4f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d76675029afcf9f6f278e333ab9728217548e4452d86a48515d81e5b681c4f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d76675029afcf9f6f278e333ab9728217548e4452d86a48515d81e5b681c4f/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d76675029afcf9f6f278e333ab9728217548e4452d86a48515d81e5b681c4f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.
Feb 24 15:37:38 compute-0 podman[198460]: 2026-02-24 15:37:38.605330814 +0000 UTC m=+0.153452668 container init 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + sudo -E kolla_set_configs
Feb 24 15:37:38 compute-0 podman[198460]: 2026-02-24 15:37:38.635516971 +0000 UTC m=+0.183638825 container start 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true)
Feb 24 15:37:38 compute-0 podman[198460]: ceilometer_agent_compute
Feb 24 15:37:38 compute-0 sudo[198481]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: sudo: unable to send audit message: Operation not permitted
Feb 24 15:37:38 compute-0 sudo[198481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:37:38 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Feb 24 15:37:38 compute-0 sudo[198411]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Validating config file
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Copying service configuration files
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: INFO:__main__:Writing out command to execute
Feb 24 15:37:38 compute-0 sudo[198481]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: ++ cat /run_command
Feb 24 15:37:38 compute-0 podman[198482]: 2026-02-24 15:37:38.736370329 +0000 UTC m=+0.090797629 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + ARGS=
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + sudo kolla_copy_cacerts
Feb 24 15:37:38 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:37:38 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Failed with result 'exit-code'.
Feb 24 15:37:38 compute-0 sudo[198519]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: sudo: unable to send audit message: Operation not permitted
Feb 24 15:37:38 compute-0 sudo[198519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:37:38 compute-0 sudo[198519]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + [[ ! -n '' ]]
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + . kolla_extend_start
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + umask 0022
Feb 24 15:37:38 compute-0 ceilometer_agent_compute[198475]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.425 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.425 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.425 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.425 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.425 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.426 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.427 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.428 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.429 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.430 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.431 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.432 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.433 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.434 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.435 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.436 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.455 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.456 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.457 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.458 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.459 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.460 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.461 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.462 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.463 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.464 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.472 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
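[editor's note] The dump that closes above is produced by cotyledon's oslo_config glue calling oslo.config's log_opt_values() as the worker starts; the cfg.py:2817/2824 suffixes on each line are the emitting statements inside that helper, and options declared secret (the messaging URLs, telemetry secret, and credentials) are masked as ****. A minimal, hypothetical sketch of the same mechanism, reusing two option names that appear in the dump above (this is not ceilometer's actual startup code):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Two options borrowed from the dump above; defaults here are illustrative.
    OPTS = [
        cfg.IntOpt('batch_size', default=50,
                   help='How many samples to batch before publishing.'),
        cfg.BoolOpt('enable_prometheus_exporter', default=False,
                    help='Expose polled samples on a Prometheus endpoint.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(OPTS)

    # Parse CLI/config-file input, then emit one DEBUG line per registered
    # option -- the "name = value" lines seen above. Options registered
    # with secret=True are printed as "****".
    CONF(args=[], project='example')
    CONF.log_opt_values(LOG, logging.DEBUG)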
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.472 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.474 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.477 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.477 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
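[editor's note] The four lines above show the AgentHeartBeatManager worker coming up: a heartbeat child process plus one thread draining an update queue (_read_queue) and one reporting status (_report_status), with polling.heartbeat_socket_dir = /var/lib/ceilometer per the dump. As a generic illustration of that queue-plus-reporter shape only (class, names, and interval are hypothetical, not ceilometer's manager.py):

    import queue
    import threading
    import time

    class HeartBeatManager:
        """Toy heartbeat manager: one thread ingests updates, another
        periodically reports the most recent timestamp per source."""

        def __init__(self, report_interval=10):
            self._updates = queue.Queue()
            self._status = {}
            self._interval = report_interval

        def _read_queue(self):
            # Drain heartbeat updates as they arrive.
            while True:
                name, timestamp = self._updates.get()
                self._status[name] = timestamp

        def _report_status(self):
            # Periodically publish the latest known heartbeat per source.
            while True:
                time.sleep(self._interval)
                for name, ts in sorted(self._status.items()):
                    print(f'heartbeat {name}: last update {ts:.0f}')

        def run(self):
            for target in (self._read_queue, self._report_status):
                threading.Thread(target=target, daemon=True).start()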
Feb 24 15:37:39 compute-0 python3.9[198656]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.685 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.696 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.697 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.697 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
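[editor's note] The agent found no dynamic pollster definitions in /etc/ceilometer/pollsters.d (the directory named by polling.pollsters_definitions_dirs in the dump above), so only the built-in compute pollsters will run here. For reference, a definition file in that directory is YAML shaped roughly like the upstream dynamic-pollster example below; the specific meter and endpoint are illustrative, not taken from this deployment:

    import yaml

    # Hypothetical /etc/ceilometer/pollsters.d/vpn.yaml contents; field
    # names follow the upstream Ceilometer dynamic-pollster format.
    POLLSTER_YAML = """
    ---
    - name: "dynamic.network.services.vpn.connection"
      sample_type: "gauge"
      unit: "ipsec_site_connection"
      value_attribute: "status"
      endpoint_type: "network"
      url_path: "v2.0/vpn/ipsec-site-connections"
    """

    for definition in yaml.safe_load(POLLSTER_YAML):
        print(definition["name"], definition["sample_type"])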
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.798 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.799 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.800 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.801 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.802 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.803 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
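
The dump that ends with the asterisk banner above is produced by oslo.config itself: the cotyledon glue calls log_opt_values() on the parsed configuration object, which emits one DEBUG line per registered option (masking secret options such as service_credentials.password as ****) between banner rows. A minimal sketch of that call, assuming a generic logger; option registration is elided:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF
    # Emits one DEBUG line per registered option, secrets masked,
    # framed by '****...' banner rows -- the block seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)
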
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.808 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.810 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
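
The dict logged above is the parsed polling definition: the 'disk.*' and 'network.*' entries in the meters list are shell-style globs, which is why the pollster lines that follow cover every disk.device.* and network.* meter. A minimal sketch of that matching using fnmatch, which mirrors (but is not quoted from) ceilometer's selection logic:

    import fnmatch

    # Meters list taken verbatim from the config dict logged above.
    meters = ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']

    def selected(meter_name):
        # A meter is polled if it matches any pattern in the source.
        return any(fnmatch.fnmatch(meter_name, pattern) for pattern in meters)

    assert selected('disk.device.read.bytes')
    assert selected('network.incoming.bytes.delta')
    assert not selected('image.size')
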
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.825 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.825 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
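
The two lines above describe the execution model for this polling task: each pollster in the source is submitted to a ThreadPoolExecutor, here sized at a single worker thread, so pollsters run one after another. A minimal sketch of that submit-and-collect pattern; the pollster names come from the log, while poll() is a hypothetical stand-in for a real pollster:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def poll(name):
        # A real pollster would run discovery, then sample each resource.
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, name)
                   for name in ('memory.usage', 'cpu', 'power.state')]
        for future in as_completed(futures):
            print(f"Finished processing pollster [{future.result()}].")
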
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.825 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.826 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.827 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
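
Connecting to qemu:///system is how the compute agent reaches the local hypervisor. A minimal sketch using the libvirt Python binding (python3-libvirt); opening read-only is an assumption here (it suffices for listing domains), not necessarily the mode ceilometer uses:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # An idle compute node, as in this log, returns an empty list.
        for dom in conn.listAllDomains():
            print(dom.name(), dom.state())
    finally:
        conn.close()
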
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.827 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.827 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.831 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.832 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.834 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.835 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fe9e7bf0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:37:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:37:39.846 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
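
The cycle above shows the per-task discovery cache at work: the local_instances discovery runs once, its empty result is cached ({'local_instances': []}), and every subsequent pollster consults the cache and is skipped because no instances exist yet on this freshly deployed node. A minimal sketch of that caching pattern; the function bodies are hypothetical stand-ins:

    def discover_local_instances():
        # On a compute node with no VMs yet, discovery yields nothing.
        return []

    discovery_cache = {}

    def resources_for(pollster_name):
        # Discovery runs at most once per polling cycle; later
        # pollsters reuse the cached result.
        if 'local_instances' not in discovery_cache:
            discovery_cache['local_instances'] = discover_local_instances()
        return discovery_cache['local_instances']

    for name in ('memory.usage', 'cpu', 'disk.device.read.bytes'):
        if not resources_for(name):
            print(f"Skip pollster {name}, no resources found this cycle")
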
Feb 24 15:37:40 compute-0 sudo[198817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esocapovzdpntzihedzizsrpigkmiiaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947460.1596017-558-62543257989581/AnsiballZ_stat.py'
Feb 24 15:37:40 compute-0 sudo[198817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:40 compute-0 python3.9[198820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:40 compute-0 sudo[198817]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:41 compute-0 sudo[198943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxdfodhkzdsaggosoeyodrkxjgxblxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947460.1596017-558-62543257989581/AnsiballZ_copy.py'
Feb 24 15:37:41 compute-0 sudo[198943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:41 compute-0 python3.9[198946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947460.1596017-558-62543257989581/.source.yaml _original_basename=.52bo9eca follow=False checksum=f7b3973a6f229b37e077caab4150475e93f2b6c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:41 compute-0 sudo[198943]: pam_unix(sudo:session): session closed for user root
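
The stat/copy pair above is Ansible's standard idempotent file transfer: the stat module computes the SHA-1 of any existing remote file, and the copy module rewrites it only when that differs from the source checksum (f7b3973a... in the log). A minimal sketch of the comparison; the path and expected digest are taken from the log lines above, and the standalone check is illustrative rather than Ansible's actual code:

    import hashlib

    def sha1_of(path):
        digest = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                digest.update(chunk)
        return digest.hexdigest()

    expected = 'f7b3973a6f229b37e077caab4150475e93f2b6c7'
    path = '/var/lib/edpm-config/deployed_services.yaml'
    # copy reports "changed" (and transfers the file) only on mismatch.
    changed = sha1_of(path) != expected
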
Feb 24 15:37:41 compute-0 sudo[199096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxaoailmzzuplqlmmmramnocifovlpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947461.4664729-573-222999864303297/AnsiballZ_stat.py'
Feb 24 15:37:41 compute-0 sudo[199096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:41 compute-0 python3.9[199099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:41 compute-0 sudo[199096]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:42 compute-0 sudo[199220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlvinekdslmxzuqhkkhrelaeltglqmcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947461.4664729-573-222999864303297/AnsiballZ_copy.py'
Feb 24 15:37:42 compute-0 sudo[199220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:42 compute-0 python3.9[199223]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947461.4664729-573-222999864303297/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:42 compute-0 sudo[199220]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:43 compute-0 sudo[199373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swnhretiolacgvequpjrijlchcgwrdav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947463.2791362-594-4659068138906/AnsiballZ_file.py'
Feb 24 15:37:43 compute-0 sudo[199373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:43 compute-0 python3.9[199376]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:43 compute-0 sudo[199373]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:44 compute-0 sudo[199526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpafjnkkkipdkiczhabucidmakybyxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947463.9934008-602-247338551444524/AnsiballZ_file.py'
Feb 24 15:37:44 compute-0 sudo[199526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:44 compute-0 python3.9[199529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:44 compute-0 sudo[199526]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:45 compute-0 sudo[199679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whyphxtstrvulkvjtemetxrnfdkcslac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947464.7444644-610-66357045052767/AnsiballZ_stat.py'
Feb 24 15:37:45 compute-0 sudo[199679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:45 compute-0 python3.9[199682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:45 compute-0 sudo[199679]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:45 compute-0 sudo[199758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdgixnsngihvmictclsbznwfoovupfvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947464.7444644-610-66357045052767/AnsiballZ_file.py'
Feb 24 15:37:45 compute-0 sudo[199758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:45 compute-0 python3.9[199761]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.0fc5mk37 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:45 compute-0 sudo[199758]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:46 compute-0 python3.9[199911]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:48 compute-0 sudo[200332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpstuomwipupcmhoyzyfphxighxnzujb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947468.1470058-647-141470088913566/AnsiballZ_container_config_data.py'
Feb 24 15:37:48 compute-0 sudo[200332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:48 compute-0 python3.9[200335]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Feb 24 15:37:48 compute-0 sudo[200332]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:49 compute-0 sudo[200485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmnpjxzplfewiphrhigycnlmodfcaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947469.1251051-658-111728405319725/AnsiballZ_container_config_hash.py'
Feb 24 15:37:49 compute-0 sudo[200485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:49 compute-0 python3.9[200488]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:37:49 compute-0 sudo[200485]: pam_unix(sudo:session): session closed for user root
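
The container_config_hash step above produces the EDPM_CONFIG_HASH values that appear in the container labels below. A minimal sketch of one plausible scheme, assuming each 64-hex-character segment is a sha256 digest over a config volume under /var/lib/openstack and that segments are joined with '-'; the module's real algorithm is not shown in this log, and sha256_of_dir is purely illustrative:

    import hashlib, pathlib

    def sha256_of_dir(path):
        # Hash every regular file under a config volume in a stable order.
        h = hashlib.sha256()
        for f in sorted(pathlib.Path(path).rglob("*")):
            if f.is_file():
                h.update(f.read_bytes())
        return h.hexdigest()

    # Per-service digests joined with '-' would yield an EDPM_CONFIG_HASH-like value.
    print("-".join(sha256_of_dir(p) for p in ("/var/lib/openstack/telemetry",)))
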
Feb 24 15:37:50 compute-0 sudo[200638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admhwwbvbelftmzudnipsaukhlsruhao ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947470.101115-668-88571417748392/AnsiballZ_edpm_container_manage.py'
Feb 24 15:37:50 compute-0 sudo[200638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:50 compute-0 python3[200641]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:37:50 compute-0 podman[200678]: 2026-02-24 15:37:50.891064592 +0000 UTC m=+0.055798520 container create 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:37:50 compute-0 podman[200678]: 2026-02-24 15:37:50.857625532 +0000 UTC m=+0.022359510 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Feb 24 15:37:50 compute-0 python3[200641]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /:/rootfs:ro --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl --path.rootfs=/rootfs
Feb 24 15:37:51 compute-0 sudo[200638]: pam_unix(sudo:session): session closed for user root
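
The PODMAN-CONTAINER-DEBUG line above makes visible how edpm_container_manage flattens the config_data JSON into podman create flags: 'environment' becomes --env, 'net' becomes --network, 'ports' becomes --publish, 'volumes' becomes --volume, and 'command' is appended after the image. A Python sketch of that flattening, reduced to the keys seen in this log; it illustrates the shape of the logged command, not the module's source:

    import shlex

    config = {
        "image": "quay.io/prometheus/node-exporter:v1.5.0",
        "net": "host",
        "ports": ["9100:9100"],
        "privileged": True,
        "user": "root",
        "environment": {"OS_ENDPOINT_TYPE": "internal"},
        "volumes": ["/:/rootfs:ro"],
        "command": ["--path.rootfs=/rootfs"],
    }

    def to_podman_args(name, cfg):
        args = ["podman", "create", "--name", name]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]   # matches the logged flag spelling
        for p in cfg.get("ports", []):
            args += ["--publish", p]
        args += ["--user", cfg["user"]]
        for v in cfg.get("volumes", []):
            args += ["--volume", v]
        return args + [cfg["image"]] + cfg.get("command", [])

    print(shlex.join(to_podman_args("node_exporter", config)))
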
Feb 24 15:37:51 compute-0 sudo[200866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwoiuycgkockbmxckcrubjhsbevoqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947471.2732863-676-206997487465221/AnsiballZ_stat.py'
Feb 24 15:37:51 compute-0 sudo[200866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:51 compute-0 python3.9[200869]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:51 compute-0 sudo[200866]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:52 compute-0 sudo[201021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnsascaruigydhczblrvcvzywbkiefyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947472.0677767-685-202409767937412/AnsiballZ_file.py'
Feb 24 15:37:52 compute-0 sudo[201021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:52 compute-0 python3.9[201024]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:52 compute-0 sudo[201021]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:52 compute-0 sudo[201098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowtujuweqedgbruwfglsmdpngikgvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947472.0677767-685-202409767937412/AnsiballZ_stat.py'
Feb 24 15:37:52 compute-0 sudo[201098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:52 compute-0 python3.9[201101]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:37:53 compute-0 sudo[201098]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:53 compute-0 sudo[201250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvjperrjilrejmmvuagzxmzjfdgavbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947473.076994-685-6970833550029/AnsiballZ_copy.py'
Feb 24 15:37:53 compute-0 sudo[201250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:53 compute-0 python3.9[201253]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947473.076994-685-6970833550029/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:53 compute-0 sudo[201250]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:53 compute-0 sudo[201327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxyxzzpkffaayfikkdeauzsbcwgidwgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947473.076994-685-6970833550029/AnsiballZ_systemd.py'
Feb 24 15:37:53 compute-0 sudo[201327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:54 compute-0 python3.9[201330]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:37:54 compute-0 systemd[1]: Reloading.
Feb 24 15:37:54 compute-0 systemd-rc-local-generator[201365]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:37:54 compute-0 systemd-sysv-generator[201368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:37:54 compute-0 sudo[201327]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:54 compute-0 sudo[201446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjuxslqhnremdepjzurwhogkmdydguv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947473.076994-685-6970833550029/AnsiballZ_systemd.py'
Feb 24 15:37:54 compute-0 sudo[201446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:55 compute-0 python3.9[201449]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:37:55 compute-0 systemd[1]: Reloading.
Feb 24 15:37:55 compute-0 systemd-sysv-generator[201477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:37:55 compute-0 systemd-rc-local-generator[201471]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:37:55 compute-0 systemd[1]: Starting node_exporter container...
Feb 24 15:37:55 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:37:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:37:55.690 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:37:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:37:55.691 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:37:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:37:55.691 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07068708e2000f9bf8c165ad5edbc77bdd1d1b642b39cd704590f87644266e3b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07068708e2000f9bf8c165ad5edbc77bdd1d1b642b39cd704590f87644266e3b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:37:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.
Feb 24 15:37:55 compute-0 podman[201496]: 2026-02-24 15:37:55.905365515 +0000 UTC m=+0.319863029 container init 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.922Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.923Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.923Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.923Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.923Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.924Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.924Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.924Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.924Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
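
The two patterns just parsed decide which systemd units the collector exports. node_exporter anchors them, so re.fullmatch is a fair approximation when checking a unit name against the exact flags from this log:

    import re

    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    exclude = re.compile(r".+\.(automount|device|mount|scope|slice)")

    for unit in ("edpm_node_exporter.service", "sshd.service", "tmp.mount"):
        kept = include.fullmatch(unit) and not exclude.fullmatch(unit)
        print(unit, "->", "collected" if kept else "skipped")
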
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.924Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=arp
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=bcache
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=bonding
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=btrfs
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=conntrack
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=cpu
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=diskstats
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=edac
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=filefd
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=filesystem
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=infiniband
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=ipvs
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=loadavg
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=mdadm
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=meminfo
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=netclass
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=netdev
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=netstat
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=nfs
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=nfsd
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=nvme
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=schedstat
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=sockstat
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=softnet
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=systemd
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=tapestats
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=vmstat
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=xfs
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.925Z caller=node_exporter.go:117 level=info collector=zfs
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.926Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Feb 24 15:37:55 compute-0 node_exporter[201511]: ts=2026-02-24T15:37:55.926Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
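
With TLS active on [::]:9100, a scrape must speak HTTPS. A minimal probe, assuming the CA that signed the server certificate is a ca.crt under the /var/lib/openstack/certs/telemetry/default mount seen in the volume list; the exact filename and the hostname in the certificate are inferred, not confirmed by the log:

    import ssl, urllib.request

    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")  # assumed path
    with urllib.request.urlopen("https://compute-0:9100/metrics",
                                context=ctx, timeout=5) as resp:
        print(resp.status, resp.read(120).decode())
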
Feb 24 15:37:55 compute-0 podman[201496]: 2026-02-24 15:37:55.930984127 +0000 UTC m=+0.345481621 container start 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:37:55 compute-0 podman[201496]: node_exporter
Feb 24 15:37:55 compute-0 systemd[1]: Started node_exporter container.
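
The two ansible-systemd invocations above (daemon_reload=True, then state=restarted with enabled=True) reduce to a standard systemctl sequence; run by hand it would be:

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_node_exporter.service"],
                ["systemctl", "restart", "edpm_node_exporter.service"]):
        subprocess.run(cmd, check=True)  # mirrors the module's reload, enable, restart
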
Feb 24 15:37:56 compute-0 sudo[201446]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:56 compute-0 podman[201520]: 2026-02-24 15:37:56.019655202 +0000 UTC m=+0.074528517 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:37:56 compute-0 python3.9[201694]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
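
ansible.builtin.slurp returns the target file base64-encoded in its result, and the play decodes it controller-side. A sketch of that decode step, with a placeholder payload since the real deployed_services.yaml content never appears in this log:

    import base64

    slurp_result = {  # shape of a slurp module result
        "content": base64.b64encode(b"- node_exporter\n- podman_exporter\n").decode(),
        "encoding": "base64",
    }
    print(base64.b64decode(slurp_result["content"]).decode())
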
Feb 24 15:37:57 compute-0 sudo[201844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsijoxlqrujbvszkjtlvycfcziobbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947477.3731856-730-22307575195683/AnsiballZ_stat.py'
Feb 24 15:37:57 compute-0 sudo[201844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:57 compute-0 python3.9[201847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:57 compute-0 sudo[201844]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:58 compute-0 sudo[201970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwmytwdmliiaydhbynperazdoqiuixcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947477.3731856-730-22307575195683/AnsiballZ_copy.py'
Feb 24 15:37:58 compute-0 sudo[201970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:58 compute-0 python3.9[201973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947477.3731856-730-22307575195683/.source.yaml _original_basename=.alyqu359 follow=False checksum=20b79df375295e552442cfab298d2eb3e4dda4ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:37:58 compute-0 sudo[201970]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:59 compute-0 sudo[202134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkehlcvcdkhiiusviauiihmtbuwtqzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947478.753618-745-270532483198435/AnsiballZ_stat.py'
Feb 24 15:37:59 compute-0 sudo[202134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:59 compute-0 podman[202097]: 2026-02-24 15:37:59.130909479 +0000 UTC m=+0.081307911 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 24 15:37:59 compute-0 python3.9[202142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:37:59 compute-0 sudo[202134]: pam_unix(sudo:session): session closed for user root
Feb 24 15:37:59 compute-0 sudo[202266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgpkvbstniyspqebyhvjlimmrgsznhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947478.753618-745-270532483198435/AnsiballZ_copy.py'
Feb 24 15:37:59 compute-0 sudo[202266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:37:59 compute-0 python3.9[202269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947478.753618-745-270532483198435/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:37:59 compute-0 sudo[202266]: pam_unix(sudo:session): session closed for user root
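
The copy above applies setype=container_file_t so the healthcheck script stays readable through the :ro,z bind mount into the container. One way to confirm the label landed is to read the security.selinux xattr directly:

    import os

    path = "/var/lib/openstack/healthchecks/podman_exporter/healthcheck"
    label = os.getxattr(path, "security.selinux")
    print(label.decode().rstrip("\x00"))  # expect ...:object_r:container_file_t:s0
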
Feb 24 15:38:00 compute-0 sudo[202419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diehsmsrbiekpxeejxutjckanjadhxnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947480.5410728-766-6235907326602/AnsiballZ_file.py'
Feb 24 15:38:00 compute-0 sudo[202419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:01 compute-0 python3.9[202422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:01 compute-0 sudo[202419]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:01 compute-0 sudo[202572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfbqqjqxspsdqblyacujfrrnipdpzfgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947481.2875755-774-167660411229691/AnsiballZ_file.py'
Feb 24 15:38:01 compute-0 sudo[202572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:01 compute-0 python3.9[202575]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:38:01 compute-0 sudo[202572]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:02 compute-0 sudo[202725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umuzmdxkochoqowduvlsiyfzpjjhjszs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947481.9822927-782-102295450844208/AnsiballZ_stat.py'
Feb 24 15:38:02 compute-0 sudo[202725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:02 compute-0 python3.9[202728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:38:02 compute-0 sudo[202725]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:02 compute-0 sudo[202804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqhfvqajnjyrnkrutugampuddloyvau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947481.9822927-782-102295450844208/AnsiballZ_file.py'
Feb 24 15:38:02 compute-0 sudo[202804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:02 compute-0 python3.9[202807]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.udntxv6a recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:02 compute-0 sudo[202804]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:03 compute-0 python3.9[202957]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:05 compute-0 sudo[203378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdjnwszkpwfqgbgzazpastffwdenruk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947485.0255706-819-137194056703378/AnsiballZ_container_config_data.py'
Feb 24 15:38:05 compute-0 sudo[203378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:05 compute-0 python3.9[203381]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Feb 24 15:38:05 compute-0 sudo[203378]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:06 compute-0 sudo[203531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlatniwhzhbpjyovboooqdbrlvearreh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947485.9723206-830-140781030473254/AnsiballZ_container_config_hash.py'
Feb 24 15:38:06 compute-0 sudo[203531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:06 compute-0 python3.9[203534]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:38:06 compute-0 sudo[203531]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:07 compute-0 sudo[203684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrbqlekknykcoljjcpkoamilchrwglh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947486.8581717-840-154062511402350/AnsiballZ_edpm_container_manage.py'
Feb 24 15:38:07 compute-0 sudo[203684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:07 compute-0 python3[203687]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:38:08 compute-0 podman[203713]: 2026-02-24 15:38:08.172959663 +0000 UTC m=+0.133710802 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 15:38:08 compute-0 auditd[721]: Audit daemon rotating log files
Feb 24 15:38:08 compute-0 podman[203700]: 2026-02-24 15:38:08.911263848 +0000 UTC m=+1.338082265 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Feb 24 15:38:09 compute-0 podman[203823]: 2026-02-24 15:38:09.084032273 +0000 UTC m=+0.065450857 container create 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible)
Feb 24 15:38:09 compute-0 podman[203823]: 2026-02-24 15:38:09.051455694 +0000 UTC m=+0.032874318 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Feb 24 15:38:09 compute-0 python3[203687]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Feb 24 15:38:09 compute-0 podman[203827]: 2026-02-24 15:38:09.131274885 +0000 UTC m=+0.081380771 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 24 15:38:09 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:38:09 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Failed with result 'exit-code'.
Feb 24 15:38:09 compute-0 sudo[203684]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:09 compute-0 sudo[204030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebolhpmcqjirlocsxxgfpzkpydgfdtuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947489.464027-848-133066491620024/AnsiballZ_stat.py'
Feb 24 15:38:09 compute-0 sudo[204030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:09 compute-0 python3.9[204033]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:38:10 compute-0 sudo[204030]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:10 compute-0 sudo[204185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlnmzwgxbtjjbfgcfipbghnypsofbgnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947490.3381302-857-10218145009574/AnsiballZ_file.py'
Feb 24 15:38:10 compute-0 sudo[204185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:10 compute-0 python3.9[204188]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:10 compute-0 sudo[204185]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:11 compute-0 sudo[204262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqbgczugaijrrtgqamjpxhdiljsfexb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947490.3381302-857-10218145009574/AnsiballZ_stat.py'
Feb 24 15:38:11 compute-0 sudo[204262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:11 compute-0 python3.9[204265]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:38:11 compute-0 sudo[204262]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:11 compute-0 sudo[204414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwbfzfwcxdcdemhgujythgdtrsjbhyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947491.3440597-857-205579021856604/AnsiballZ_copy.py'
Feb 24 15:38:11 compute-0 sudo[204414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:11 compute-0 python3.9[204417]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947491.3440597-857-205579021856604/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:12 compute-0 sudo[204414]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:12 compute-0 sudo[204491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eerdvjcwviloedgufyabsclkedymzgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947491.3440597-857-205579021856604/AnsiballZ_systemd.py'
Feb 24 15:38:12 compute-0 sudo[204491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:12 compute-0 python3.9[204494]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:38:12 compute-0 systemd[1]: Reloading.
Feb 24 15:38:12 compute-0 systemd-sysv-generator[204522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:38:12 compute-0 systemd-rc-local-generator[204516]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:38:12 compute-0 sudo[204491]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:13 compute-0 sudo[204609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjzmvnadggrbzgdpkuoozzjpriskyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947491.3440597-857-205579021856604/AnsiballZ_systemd.py'
Feb 24 15:38:13 compute-0 sudo[204609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:13 compute-0 python3.9[204612]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:38:13 compute-0 systemd[1]: Reloading.
Feb 24 15:38:13 compute-0 systemd-sysv-generator[204641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:38:13 compute-0 systemd-rc-local-generator[204638]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:38:13 compute-0 systemd[1]: Starting podman_exporter container...
Feb 24 15:38:14 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a9af3f42168df62fca0f834010a4a6c3e1a6edef035362860d77d0ed5f95a9/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a9af3f42168df62fca0f834010a4a6c3e1a6edef035362860d77d0ed5f95a9/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:38:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.
Feb 24 15:38:14 compute-0 podman[204659]: 2026-02-24 15:38:14.110233078 +0000 UTC m=+0.162045504 container init 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.137Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.137Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.137Z caller=handler.go:94 level=info msg="enabled collectors"
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.137Z caller=handler.go:105 level=info collector=container
Feb 24 15:38:14 compute-0 podman[204659]: 2026-02-24 15:38:14.144240203 +0000 UTC m=+0.196052609 container start 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:38:14 compute-0 systemd[1]: Starting Podman API Service...
Feb 24 15:38:14 compute-0 podman[204659]: podman_exporter
Feb 24 15:38:14 compute-0 systemd[1]: Started Podman API Service.
Feb 24 15:38:14 compute-0 systemd[1]: Started podman_exporter container.
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="Setting parallel job count to 25"
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="Using sqlite as database backend"
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 24 15:38:14 compute-0 sudo[204609]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 24 15:38:14 compute-0 podman[204685]: @ - - [24/Feb/2026:15:38:14 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Feb 24 15:38:14 compute-0 podman[204685]: time="2026-02-24T15:38:14Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:38:14 compute-0 podman[204683]: 2026-02-24 15:38:14.233978896 +0000 UTC m=+0.077434641 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:38:14 compute-0 systemd[1]: 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9-3bfeebe28f9e7619.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:38:14 compute-0 systemd[1]: 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9-3bfeebe28f9e7619.service: Failed with result 'exit-code'.
Feb 24 15:38:14 compute-0 podman[204685]: @ - - [24/Feb/2026:15:38:14 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18631 "" "Go-http-client/1.1"
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.251Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.251Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Feb 24 15:38:14 compute-0 podman_exporter[204674]: ts=2026-02-24T15:38:14.252Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Feb 24 15:38:14 compute-0 python3.9[204871]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:38:15 compute-0 sudo[205021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbkfubzhzeeilvphiigmhogjunfolhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947495.6194155-902-86471958913225/AnsiballZ_stat.py'
Feb 24 15:38:15 compute-0 sudo[205021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:16 compute-0 python3.9[205024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:38:16 compute-0 sudo[205021]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:16 compute-0 sudo[205147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhdroecfhdtqmvudfzpcsnjyoiezhqxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947495.6194155-902-86471958913225/AnsiballZ_copy.py'
Feb 24 15:38:16 compute-0 sudo[205147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:16 compute-0 python3.9[205150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947495.6194155-902-86471958913225/.source.yaml _original_basename=.g7ix4kw7 follow=False checksum=3fc25a9e746dd6a19246c00f47479373a6cd8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:16 compute-0 sudo[205147]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:17 compute-0 sudo[205300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdnfcxntleelynceautgjkkojfxwyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947497.0297139-917-223814181197307/AnsiballZ_stat.py'
Feb 24 15:38:17 compute-0 sudo[205300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:17 compute-0 python3.9[205303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:38:17 compute-0 sudo[205300]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:17 compute-0 sudo[205424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdnkppxnstnxzomaapxoavjtcdbgfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947497.0297139-917-223814181197307/AnsiballZ_copy.py'
Feb 24 15:38:17 compute-0 sudo[205424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:18 compute-0 python3.9[205427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947497.0297139-917-223814181197307/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:38:18 compute-0 sudo[205424]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:18 compute-0 sudo[205577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsucosvjjhuvyfmauulqocnuyksdphyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947498.6618328-938-121777121828508/AnsiballZ_file.py'
Feb 24 15:38:18 compute-0 sudo[205577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:19 compute-0 python3.9[205580]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:19 compute-0 sudo[205577]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:19 compute-0 sudo[205730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpmpcahgdorhdfegzexudotutddzyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947499.3650937-946-2593865714104/AnsiballZ_file.py'
Feb 24 15:38:19 compute-0 sudo[205730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:19 compute-0 python3.9[205733]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:38:19 compute-0 sudo[205730]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:20 compute-0 sudo[205883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchhfnvlaemsvzhdfslzukpyckoogbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947500.069924-954-169134414022275/AnsiballZ_stat.py'
Feb 24 15:38:20 compute-0 sudo[205883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:20 compute-0 python3.9[205886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:38:20 compute-0 sudo[205883]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:20 compute-0 sudo[205962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqikixehqdabhfbgemeupbejirniscyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947500.069924-954-169134414022275/AnsiballZ_file.py'
Feb 24 15:38:20 compute-0 sudo[205962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:21 compute-0 python3.9[205965]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.f329_tnj recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:21 compute-0 sudo[205962]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:21 compute-0 python3.9[206115]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:23 compute-0 sudo[206536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzyphvcwdatbmfygjogtilotguusqhma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947503.299135-991-237002558266938/AnsiballZ_container_config_data.py'
Feb 24 15:38:23 compute-0 sudo[206536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:23 compute-0 python3.9[206539]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Feb 24 15:38:23 compute-0 sudo[206536]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:24 compute-0 sudo[206689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefvvvrwnvsvmuujykajiwxmdbobcuka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947504.2511618-1002-205189272540059/AnsiballZ_container_config_hash.py'
Feb 24 15:38:24 compute-0 sudo[206689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:24 compute-0 python3.9[206692]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:38:24 compute-0 sudo[206689]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:25 compute-0 sudo[206842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxtlcksbpgjjpqadmljwhmcsfcbbezel ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947505.112819-1012-40247894907467/AnsiballZ_edpm_container_manage.py'
Feb 24 15:38:25 compute-0 sudo[206842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:25 compute-0 python3[206845]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:38:27 compute-0 podman[206904]: 2026-02-24 15:38:27.089502324 +0000 UTC m=+0.120671601 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:38:28 compute-0 podman[206860]: 2026-02-24 15:38:28.426297235 +0000 UTC m=+2.707637028 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 24 15:38:28 compute-0 podman[206979]: 2026-02-24 15:38:28.558933639 +0000 UTC m=+0.058760006 container create e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 24 15:38:28 compute-0 podman[206979]: 2026-02-24 15:38:28.527242673 +0000 UTC m=+0.027069090 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 24 15:38:28 compute-0 python3[206845]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 24 15:38:28 compute-0 sudo[206842]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:29 compute-0 sudo[207180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tibdzdgpeowfeuhvvywqhogtbbwwdhwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947508.9051597-1020-15941713668113/AnsiballZ_stat.py'
Feb 24 15:38:29 compute-0 sudo[207180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:29 compute-0 podman[207141]: 2026-02-24 15:38:29.23935569 +0000 UTC m=+0.071289805 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:38:29 compute-0 python3.9[207189]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:38:29 compute-0 sudo[207180]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:30 compute-0 sudo[207341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogokaymxjpnodbvywhkwntfseefggeql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947509.711096-1029-197018000407807/AnsiballZ_file.py'
Feb 24 15:38:30 compute-0 sudo[207341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:30 compute-0 python3.9[207344]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:30 compute-0 sudo[207341]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:30 compute-0 sudo[207418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmjwdmrfkjatuzwwhthdjhthobfmgchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947509.711096-1029-197018000407807/AnsiballZ_stat.py'
Feb 24 15:38:30 compute-0 sudo[207418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:30 compute-0 python3.9[207421]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:38:30 compute-0 sudo[207418]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:31 compute-0 sudo[207570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaozzpjmrrfglgjjdgimfmojkrocxbkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947510.779226-1029-178722868902654/AnsiballZ_copy.py'
Feb 24 15:38:31 compute-0 sudo[207570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:31 compute-0 python3.9[207573]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947510.779226-1029-178722868902654/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:31 compute-0 sudo[207570]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:31 compute-0 sudo[207647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvlkxbttrrzudgujtljnxapzbaibnaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947510.779226-1029-178722868902654/AnsiballZ_systemd.py'
Feb 24 15:38:31 compute-0 sudo[207647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:32 compute-0 python3.9[207650]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:38:32 compute-0 systemd[1]: Reloading.
Feb 24 15:38:32 compute-0 systemd-rc-local-generator[207671]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:38:32 compute-0 systemd-sysv-generator[207676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:38:32 compute-0 sudo[207647]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:32 compute-0 sudo[207765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljxynxiwzxinupbcjxbzdzvebcfsniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947510.779226-1029-178722868902654/AnsiballZ_systemd.py'
Feb 24 15:38:32 compute-0 sudo[207765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:33 compute-0 python3.9[207768]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:38:33 compute-0 systemd[1]: Reloading.
Feb 24 15:38:33 compute-0 systemd-rc-local-generator[207792]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:38:33 compute-0 systemd-sysv-generator[207795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:38:33 compute-0 systemd[1]: Starting openstack_network_exporter container...
Feb 24 15:38:33 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d332649fd1b5050c82cae67c3c6ca4d5a81e90763481a52ec561a17feafa3de/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 24 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d332649fd1b5050c82cae67c3c6ca4d5a81e90763481a52ec561a17feafa3de/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d332649fd1b5050c82cae67c3c6ca4d5a81e90763481a52ec561a17feafa3de/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:38:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.
Feb 24 15:38:33 compute-0 podman[207815]: 2026-02-24 15:38:33.550457091 +0000 UTC m=+0.148693374 container init e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, release=1770267347, version=9.7, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *bridge.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *coverage.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *datapath.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *iface.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *memory.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *ovn.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *pmd_perf.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *pmd_rxq.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: INFO    15:38:33 main.go:48: registering *vswitch.Collector
Feb 24 15:38:33 compute-0 openstack_network_exporter[207830]: NOTICE  15:38:33 main.go:76: listening on https://:9105/metrics
Feb 24 15:38:33 compute-0 podman[207815]: 2026-02-24 15:38:33.58024841 +0000 UTC m=+0.178484653 container start e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9/ubi-minimal, release=1770267347, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container)
Feb 24 15:38:33 compute-0 podman[207815]: openstack_network_exporter
Feb 24 15:38:33 compute-0 systemd[1]: Started openstack_network_exporter container.
Feb 24 15:38:33 compute-0 sudo[207765]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:33 compute-0 podman[207840]: 2026-02-24 15:38:33.676460876 +0000 UTC m=+0.086865761 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.7, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9)
Feb 24 15:38:34 compute-0 python3.9[208014]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:38:35 compute-0 sudo[208164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbuzqkzfrumgfgpvklrzkgtrnlqxftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947514.9673831-1074-83014233391834/AnsiballZ_stat.py'
Feb 24 15:38:35 compute-0 sudo[208164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:35 compute-0 python3.9[208167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:38:35 compute-0 sudo[208164]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:36 compute-0 sudo[208290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmhehfawtmwdlodlddxjqpwjeumlgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947514.9673831-1074-83014233391834/AnsiballZ_copy.py'
Feb 24 15:38:36 compute-0 sudo[208290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:36 compute-0 python3.9[208293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947514.9673831-1074-83014233391834/.source.yaml _original_basename=._fmko12_ follow=False checksum=29acfe09820ddd6a4bc11b76a5aac952820d39ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:36 compute-0 sudo[208290]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:36 compute-0 sudo[208443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjeuxbnrtfneoothnuidfbecexrvymts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947516.3527503-1089-240633068116991/AnsiballZ_find.py'
Feb 24 15:38:36 compute-0 sudo[208443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:36 compute-0 python3.9[208446]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:38:36 compute-0 sudo[208443]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:37 compute-0 sudo[208596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucilzkkuhokfwkwdxtxxrsalffarxmvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947517.288974-1099-245471833114557/AnsiballZ_podman_container_info.py'
Feb 24 15:38:37 compute-0 sudo[208596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:37 compute-0 python3.9[208599]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 24 15:38:38 compute-0 sudo[208596]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.300 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.302 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.330 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.331 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.332 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.362 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.362 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.363 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.363 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.528 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.529 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5875MB free_disk=72.24568557739258GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.529 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.530 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.602 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.602 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.619 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.635 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.637 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:38:38 compute-0 nova_compute[188703]: 2026-02-24 15:38:38.637 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:38:38 compute-0 sudo[208778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuwkprbwzwviccffcpxxaalxwvncnuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947518.2919738-1107-263293671308867/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:38 compute-0 sudo[208778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:38 compute-0 podman[208737]: 2026-02-24 15:38:38.822793879 +0000 UTC m=+0.093363227 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 24 15:38:38 compute-0 python3.9[208786]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:39 compute-0 systemd[1]: Started libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope.
Feb 24 15:38:39 compute-0 podman[208794]: 2026-02-24 15:38:39.100539175 +0000 UTC m=+0.090526184 container exec 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller)
Feb 24 15:38:39 compute-0 podman[208794]: 2026-02-24 15:38:39.136789137 +0000 UTC m=+0.126776056 container exec_died 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:38:39 compute-0 systemd[1]: libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope: Deactivated successfully.
Feb 24 15:38:39 compute-0 sudo[208778]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.249 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.250 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.250 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.264 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:38:39 compute-0 podman[208824]: 2026-02-24 15:38:39.296362707 +0000 UTC m=+0.080622421 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 24 15:38:39 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:38:39 compute-0 systemd[1]: 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda-67e2ff5b7ab785f1.service: Failed with result 'exit-code'.
Feb 24 15:38:39 compute-0 sudo[208994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjyidmsfkeqmvgektxaccgkgvuavuawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947519.385726-1115-22125778178297/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:39 compute-0 sudo[208994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:39 compute-0 python3.9[208997]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:38:39 compute-0 nova_compute[188703]: 2026-02-24 15:38:39.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:38:40 compute-0 systemd[1]: Started libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope.
Feb 24 15:38:40 compute-0 podman[208998]: 2026-02-24 15:38:40.038672082 +0000 UTC m=+0.084468199 container exec 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 15:38:40 compute-0 podman[208998]: 2026-02-24 15:38:40.069375224 +0000 UTC m=+0.115171311 container exec_died 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 24 15:38:40 compute-0 systemd[1]: libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope: Deactivated successfully.
Feb 24 15:38:40 compute-0 sudo[208994]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:40 compute-0 sudo[209179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzqpkyflzmshfamwqmbmkmkmhvzytpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947520.3577797-1123-133564698776508/AnsiballZ_file.py'
Feb 24 15:38:40 compute-0 sudo[209179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:40 compute-0 python3.9[209182]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:40 compute-0 sudo[209179]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:41 compute-0 sudo[209332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfrjyzptldvkeeqygjgnoivxlseehdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947521.1247325-1132-254321095552622/AnsiballZ_podman_container_info.py'
Feb 24 15:38:41 compute-0 sudo[209332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:41 compute-0 python3.9[209335]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 24 15:38:41 compute-0 sudo[209332]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:42 compute-0 sudo[209499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qegipcfxgczaqzdldnngsluinmoavbkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947521.8533368-1140-61784374131397/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:42 compute-0 sudo[209499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:42 compute-0 python3.9[209502]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:42 compute-0 systemd[1]: Started libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope.
Feb 24 15:38:42 compute-0 podman[209503]: 2026-02-24 15:38:42.473435747 +0000 UTC m=+0.112068062 container exec e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 24 15:38:42 compute-0 podman[209503]: 2026-02-24 15:38:42.510552282 +0000 UTC m=+0.149184607 container exec_died e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:38:42 compute-0 systemd[1]: libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope: Deactivated successfully.
Feb 24 15:38:42 compute-0 sudo[209499]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:43 compute-0 sudo[209685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvvlkfvgtyfyvdttwtftssowldopyqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947522.738219-1148-128188800346921/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:43 compute-0 sudo[209685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:43 compute-0 python3.9[209688]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:43 compute-0 systemd[1]: Started libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope.
Feb 24 15:38:43 compute-0 podman[209689]: 2026-02-24 15:38:43.341042331 +0000 UTC m=+0.087469707 container exec e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 15:38:43 compute-0 podman[209689]: 2026-02-24 15:38:43.376470242 +0000 UTC m=+0.122897618 container exec_died e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 24 15:38:43 compute-0 systemd[1]: libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope: Deactivated successfully.
Feb 24 15:38:43 compute-0 sudo[209685]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:43 compute-0 sudo[209871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqpelvkshfsjedfybtqjfdpxkjzexovp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947523.596566-1156-165596038213999/AnsiballZ_file.py'
Feb 24 15:38:43 compute-0 sudo[209871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:44 compute-0 python3.9[209874]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:44 compute-0 sudo[209871]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:44 compute-0 sudo[210034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexdwnvdlgcjarlnbigqjgqizaxzmbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947524.3418784-1165-126978433463478/AnsiballZ_podman_container_info.py'
Feb 24 15:38:44 compute-0 sudo[210034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:44 compute-0 podman[209998]: 2026-02-24 15:38:44.764372903 +0000 UTC m=+0.059009673 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:38:44 compute-0 python3.9[210048]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 24 15:38:45 compute-0 sudo[210034]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:45 compute-0 sudo[210211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgamcvwmqffejtncvlvdbtlrxtutwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947525.3079498-1173-71910103437545/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:45 compute-0 sudo[210211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:45 compute-0 python3.9[210214]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:45 compute-0 systemd[1]: Started libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope.
Feb 24 15:38:45 compute-0 podman[210215]: 2026-02-24 15:38:45.95838662 +0000 UTC m=+0.103327159 container exec 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:38:45 compute-0 podman[210215]: 2026-02-24 15:38:45.993653587 +0000 UTC m=+0.138594046 container exec_died 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 24 15:38:46 compute-0 systemd[1]: libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope: Deactivated successfully.
Feb 24 15:38:46 compute-0 sudo[210211]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:46 compute-0 sudo[210396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyvgtaoxxfrcknxprgjohmewitphezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947526.248969-1181-110797757429429/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:46 compute-0 sudo[210396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:46 compute-0 python3.9[210399]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:46 compute-0 systemd[1]: Started libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope.
Feb 24 15:38:46 compute-0 podman[210400]: 2026-02-24 15:38:46.872727983 +0000 UTC m=+0.088177805 container exec 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:38:46 compute-0 podman[210400]: 2026-02-24 15:38:46.907511318 +0000 UTC m=+0.122961140 container exec_died 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 24 15:38:46 compute-0 systemd[1]: libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope: Deactivated successfully.
Feb 24 15:38:46 compute-0 sudo[210396]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:47 compute-0 sudo[210582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inffichaavvmbdbnidaiwlgwathxpdxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947527.186684-1189-10628192457855/AnsiballZ_file.py'
Feb 24 15:38:47 compute-0 sudo[210582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:47 compute-0 python3.9[210585]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:47 compute-0 sudo[210582]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:48 compute-0 sudo[210735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vufnhkpeqtmyytnkociowlemedbgxuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947527.9313085-1198-280960639547036/AnsiballZ_podman_container_info.py'
Feb 24 15:38:48 compute-0 sudo[210735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:48 compute-0 python3.9[210738]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 24 15:38:48 compute-0 sudo[210735]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:49 compute-0 sudo[210901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkwwhtkabxzomrhbmxpjfekwsfensbtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947528.7286859-1206-90480805669102/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:49 compute-0 sudo[210901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:49 compute-0 python3.9[210904]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:49 compute-0 systemd[1]: Started libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope.
Feb 24 15:38:49 compute-0 podman[210905]: 2026-02-24 15:38:49.525252757 +0000 UTC m=+0.088907561 container exec 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:38:49 compute-0 podman[210905]: 2026-02-24 15:38:49.556177425 +0000 UTC m=+0.119832249 container exec_died 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:38:49 compute-0 systemd[1]: libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope: Deactivated successfully.
Feb 24 15:38:49 compute-0 sudo[210901]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:50 compute-0 sudo[211084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzsvscfmuxifsonpqtmckakrghlyuzxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947529.8313572-1214-210302797559690/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:50 compute-0 sudo[211084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:50 compute-0 python3.9[211087]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:50 compute-0 systemd[1]: Started libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope.
Feb 24 15:38:50 compute-0 podman[211088]: 2026-02-24 15:38:50.398735143 +0000 UTC m=+0.081760172 container exec 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:38:50 compute-0 podman[211088]: 2026-02-24 15:38:50.434677781 +0000 UTC m=+0.117702760 container exec_died 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:38:50 compute-0 systemd[1]: libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope: Deactivated successfully.
Feb 24 15:38:50 compute-0 sudo[211084]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:50 compute-0 sudo[211269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpnglisvdphyhsuzhmszchifjxvkkhen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947530.6823034-1222-98176552680815/AnsiballZ_file.py'
Feb 24 15:38:50 compute-0 sudo[211269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:51 compute-0 python3.9[211272]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:51 compute-0 sudo[211269]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:51 compute-0 sudo[211422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiibvxmumnnjawdbzznfytvdqekgpet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947531.4078832-1231-109036064031559/AnsiballZ_podman_container_info.py'
Feb 24 15:38:51 compute-0 sudo[211422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:51 compute-0 python3.9[211425]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 24 15:38:51 compute-0 sudo[211422]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:52 compute-0 sudo[211589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soffvytznwxdctymchacfxylkmccnawn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947532.1954584-1239-46963071863129/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:52 compute-0 sudo[211589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:52 compute-0 python3.9[211592]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:52 compute-0 systemd[1]: Started libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope.
Feb 24 15:38:52 compute-0 podman[211593]: 2026-02-24 15:38:52.802921677 +0000 UTC m=+0.088681405 container exec 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:38:52 compute-0 podman[211593]: 2026-02-24 15:38:52.838469784 +0000 UTC m=+0.124229412 container exec_died 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:38:52 compute-0 systemd[1]: libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope: Deactivated successfully.
Feb 24 15:38:52 compute-0 sudo[211589]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:53 compute-0 sudo[211773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epqgbfnazxkjfogosnqwbwtbiulhxkwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947533.0681324-1247-220436229355523/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:53 compute-0 sudo[211773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:53 compute-0 python3.9[211776]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:53 compute-0 systemd[1]: Started libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope.
Feb 24 15:38:53 compute-0 podman[211777]: 2026-02-24 15:38:53.813904552 +0000 UTC m=+0.089359843 container exec 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:38:53 compute-0 podman[211777]: 2026-02-24 15:38:53.84878327 +0000 UTC m=+0.124238501 container exec_died 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:38:53 compute-0 systemd[1]: libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope: Deactivated successfully.
Feb 24 15:38:53 compute-0 sudo[211773]: pam_unix(sudo:session): session closed for user root
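The paired exec events above (command=id -u, then command=id -g) are the role probing which UID and GID the exporter process actually runs as, so the healthcheck directory managed next can be owned accordingly. Reproduced by hand (container name taken from the log; a sketch, not the module's exact code path):

    # Print the effective UID and GID inside the running container.
    podman exec podman_exporter id -u
    podman exec podman_exporter id -g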
Feb 24 15:38:54 compute-0 sudo[211958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iorrshvmohvmnnjjmczfwqvfmimtzuti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947534.1286104-1255-190429564649968/AnsiballZ_file.py'
Feb 24 15:38:54 compute-0 sudo[211958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:54 compute-0 python3.9[211961]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:54 compute-0 sudo[211958]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:55 compute-0 sudo[212111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgofdftzjhgkugnsankgnmpuygaxjiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947534.9184-1264-146694694334595/AnsiballZ_podman_container_info.py'
Feb 24 15:38:55 compute-0 sudo[212111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:55 compute-0 python3.9[212114]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 24 15:38:55 compute-0 sudo[212111]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:38:55.692 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:38:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:38:55.693 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:38:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:38:55.693 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:38:55 compute-0 sudo[212277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxpsosamdalouqhtotgbsvdnpfyhjlpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947535.6555495-1272-144685748502746/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:55 compute-0 sudo[212277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:56 compute-0 python3.9[212280]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:56 compute-0 systemd[1]: Started libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope.
Feb 24 15:38:56 compute-0 podman[212281]: 2026-02-24 15:38:56.978601474 +0000 UTC m=+0.079280343 container exec e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 15:38:57 compute-0 podman[212281]: 2026-02-24 15:38:57.008545815 +0000 UTC m=+0.109224674 container exec_died e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., release=1770267347, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, version=9.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 15:38:57 compute-0 systemd[1]: libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope: Deactivated successfully.
Feb 24 15:38:57 compute-0 sudo[212277]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:57 compute-0 sudo[212478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwxhdassaharolanoelnbtcxpufgvrxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947537.3558676-1280-194652310588106/AnsiballZ_podman_container_exec.py'
Feb 24 15:38:57 compute-0 sudo[212478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:57 compute-0 podman[212437]: 2026-02-24 15:38:57.683913861 +0000 UTC m=+0.059464243 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:38:57 compute-0 python3.9[212490]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:38:57 compute-0 systemd[1]: Started libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope.
Feb 24 15:38:57 compute-0 podman[212491]: 2026-02-24 15:38:57.981829414 +0000 UTC m=+0.087267525 container exec e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, release=1770267347)
Feb 24 15:38:58 compute-0 podman[212491]: 2026-02-24 15:38:58.016561818 +0000 UTC m=+0.121999919 container exec_died e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, managed_by=edpm_ansible, version=9.7, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, release=1770267347, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 24 15:38:58 compute-0 systemd[1]: libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope: Deactivated successfully.
Feb 24 15:38:58 compute-0 sudo[212478]: pam_unix(sudo:session): session closed for user root
Feb 24 15:38:58 compute-0 sudo[212671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvnbcdjjqmxyoftlucwoifdszqrpkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947538.2638934-1288-107370220998604/AnsiballZ_file.py'
Feb 24 15:38:58 compute-0 sudo[212671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:58 compute-0 python3.9[212674]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:58 compute-0 sudo[212671]: pam_unix(sudo:session): session closed for user root
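Each exporter gets the same ansible.builtin.file treatment seen above: a root-owned, mode-0700 healthcheck directory enforced recursively. A rough shell equivalent (path from the log; the module additionally handles idempotence and the SELinux attributes this sketch ignores):

    # Create the directory if missing, then enforce owner and mode recursively.
    mkdir -p /var/lib/openstack/healthchecks/openstack_network_exporter
    chown -R 0:0 /var/lib/openstack/healthchecks/openstack_network_exporter
    chmod -R 0700 /var/lib/openstack/healthchecks/openstack_network_exporter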
Feb 24 15:38:59 compute-0 sudo[212834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkamjqljnqxoypqkzzcprybnqknblrjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947539.0188255-1297-141224140919897/AnsiballZ_file.py'
Feb 24 15:38:59 compute-0 sudo[212834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:38:59 compute-0 podman[212798]: 2026-02-24 15:38:59.370665012 +0000 UTC m=+0.074508221 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:38:59 compute-0 python3.9[212845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:38:59 compute-0 sudo[212834]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:00 compute-0 sudo[212997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlecuwgarustnbeiivyxjfiggyeztkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947539.7506127-1305-61210594997472/AnsiballZ_stat.py'
Feb 24 15:39:00 compute-0 sudo[212997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:00 compute-0 python3.9[213000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:00 compute-0 sudo[212997]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:00 compute-0 sudo[213121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqjwuvijkjwxbjacmnlnsdrzvsdounh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947539.7506127-1305-61210594997472/AnsiballZ_copy.py'
Feb 24 15:39:00 compute-0 sudo[213121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:00 compute-0 python3.9[213124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947539.7506127-1305-61210594997472/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:00 compute-0 sudo[213121]: pam_unix(sudo:session): session closed for user root
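The stat/copy pair above is Ansible's usual change detection: stat hashes the destination with sha1, and the copy rewrites the file only when that hash differs from the source checksum recorded in the invocation (d942d984493b214bda2913f753ff68cdcedff00e). Verifying the deployed file by hand:

    # The deployed file should hash to the checksum logged by the copy task.
    sha1sum /var/lib/edpm-config/firewall/telemetry.yaml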
Feb 24 15:39:01 compute-0 sudo[213274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbtapemezxjbwkxfzgvsmqocrxctfted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947541.1364644-1321-103419557116290/AnsiballZ_file.py'
Feb 24 15:39:01 compute-0 sudo[213274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:01 compute-0 python3.9[213277]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:01 compute-0 sudo[213274]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:02 compute-0 sudo[213427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tugvcusdvaaibnihfhfpljkuoslxecqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947541.956992-1329-143547311636319/AnsiballZ_stat.py'
Feb 24 15:39:02 compute-0 sudo[213427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:02 compute-0 python3.9[213431]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:02 compute-0 sudo[213427]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:03 compute-0 sudo[213508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcychfuqhduoyquppgevknvqoqyzhhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947541.956992-1329-143547311636319/AnsiballZ_file.py'
Feb 24 15:39:03 compute-0 sudo[213508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:03 compute-0 python3.9[213511]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:03 compute-0 sudo[213508]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:03 compute-0 sudo[213661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxwlcojcjurvzvwjhvlxpmtzxbipszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947543.4604032-1341-177167335825175/AnsiballZ_stat.py'
Feb 24 15:39:03 compute-0 sudo[213661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:03 compute-0 podman[213663]: 2026-02-24 15:39:03.841553637 +0000 UTC m=+0.071861097 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc.)
Feb 24 15:39:03 compute-0 python3.9[213665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:03 compute-0 sudo[213661]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:04 compute-0 sudo[213761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkjoxaysroqkzbeqfdmvpkutznbblcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947543.4604032-1341-177167335825175/AnsiballZ_file.py'
Feb 24 15:39:04 compute-0 sudo[213761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:04 compute-0 python3.9[213764]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t1a34mfd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:04 compute-0 sudo[213761]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:04 compute-0 sudo[213914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivfrnqnmmoilihsgrqjieruhbarurkdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947544.6384847-1353-258017477242560/AnsiballZ_stat.py'
Feb 24 15:39:05 compute-0 sudo[213914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:05 compute-0 python3.9[213917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:05 compute-0 sudo[213914]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:05 compute-0 sudo[213993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiuicyekqwxgotpzumsjnaeigyprpkjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947544.6384847-1353-258017477242560/AnsiballZ_file.py'
Feb 24 15:39:05 compute-0 sudo[213993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:05 compute-0 python3.9[213996]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:05 compute-0 sudo[213993]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:06 compute-0 sudo[214146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqeoxnzoclxzcrqfvuqhzxtuvbsomwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947546.037947-1366-37967838403048/AnsiballZ_command.py'
Feb 24 15:39:06 compute-0 sudo[214146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:06 compute-0 python3.9[214149]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:06 compute-0 sudo[214146]: pam_unix(sudo:session): session closed for user root
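nft -j list ruleset, invoked above, dumps the kernel's current nftables ruleset as JSON so the role can compare it against the desired state. Inspecting the same output by hand, pretty-printed (a sketch; python3 is present on the host per the surrounding entries):

    # Dump the live ruleset as JSON and pretty-print it for reading.
    nft -j list ruleset | python3 -m json.tool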
Feb 24 15:39:07 compute-0 sudo[214300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyegnfyvcmkloofsmupkptensyjnqhl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947546.7054496-1374-262441455949027/AnsiballZ_edpm_nftables_from_files.py'
Feb 24 15:39:07 compute-0 sudo[214300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:07 compute-0 python3[214303]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 24 15:39:07 compute-0 sudo[214300]: pam_unix(sudo:session): session closed for user root
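edpm_nftables_from_files is a custom module from the edpm-ansible collection; per its src argument it reads the YAML rule files staged earlier under /var/lib/edpm-config/firewall (telemetry.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml) and returns the rule definitions that the following template tasks render into the edpm-*.nft fragments. Listing its inputs by hand (a sketch; the exact return format is internal to the collection):

    # Show the staged YAML rule sources the module consumes.
    ls -l /var/lib/edpm-config/firewall/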
Feb 24 15:39:07 compute-0 sudo[214453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsfveydqczrheiitspkrmsubxxhtnet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947547.6204224-1382-27876125893556/AnsiballZ_stat.py'
Feb 24 15:39:07 compute-0 sudo[214453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:08 compute-0 python3.9[214456]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:08 compute-0 sudo[214453]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:08 compute-0 sudo[214532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmepmezjdzsbaicsizrgyrsqkezxgzwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947547.6204224-1382-27876125893556/AnsiballZ_file.py'
Feb 24 15:39:08 compute-0 sudo[214532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:08 compute-0 python3.9[214535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:08 compute-0 sudo[214532]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:09 compute-0 podman[214612]: 2026-02-24 15:39:09.127424094 +0000 UTC m=+0.093283273 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
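Interleaved with the Ansible run, systemd-driven podman healthchecks keep firing for the long-running containers (node_exporter, ovn_metadata_agent, openstack_network_exporter and ovn_controller above), each reporting health_status=healthy with a zero failing streak. The same probe can be run on demand (a sketch):

    # Execute a container's configured healthcheck once; exit 0 means healthy.
    podman healthcheck run ovn_controller && echo healthy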
Feb 24 15:39:09 compute-0 sudo[214711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geuqxfljnqfdkdycidzaarqkvafttakj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947548.891342-1394-64333831364308/AnsiballZ_stat.py'
Feb 24 15:39:09 compute-0 sudo[214711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:09 compute-0 python3.9[214714]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:09 compute-0 sudo[214711]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:09 compute-0 podman[214717]: 2026-02-24 15:39:09.610030456 +0000 UTC m=+0.077265157 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 15:39:09 compute-0 sudo[214810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmnmzhncrfxvqxhgljgobpfseojbbyni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947548.891342-1394-64333831364308/AnsiballZ_file.py'
Feb 24 15:39:09 compute-0 sudo[214810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:10 compute-0 python3.9[214813]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:10 compute-0 sudo[214810]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:10 compute-0 sudo[214963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szikivahvhfiqihqzvvmxiradzuqzizy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947550.193089-1406-170991981302286/AnsiballZ_stat.py'
Feb 24 15:39:10 compute-0 sudo[214963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:10 compute-0 python3.9[214966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:10 compute-0 sudo[214963]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:11 compute-0 sudo[215042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxyhqfrlxlzalctvbufiqgmrposncmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947550.193089-1406-170991981302286/AnsiballZ_file.py'
Feb 24 15:39:11 compute-0 sudo[215042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:11 compute-0 python3.9[215045]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:11 compute-0 sudo[215042]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:11 compute-0 sudo[215195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astsftbrjqogcradvbqvoswjrdwtmrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947551.4978137-1418-117943260351817/AnsiballZ_stat.py'
Feb 24 15:39:11 compute-0 sudo[215195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:12 compute-0 python3.9[215198]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:12 compute-0 sudo[215195]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:12 compute-0 sudo[215274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkpbbqimsdncrxgpjedjhjpbythcxbnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947551.4978137-1418-117943260351817/AnsiballZ_file.py'
Feb 24 15:39:12 compute-0 sudo[215274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:12 compute-0 python3.9[215277]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:12 compute-0 sudo[215274]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:13 compute-0 sudo[215427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpwbwdawsbyplhjparltopordqfbkbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947552.7646863-1430-242422354023588/AnsiballZ_stat.py'
Feb 24 15:39:13 compute-0 sudo[215427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:13 compute-0 python3.9[215430]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:13 compute-0 sudo[215427]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:13 compute-0 sudo[215553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxdxlqoqmepzxlbbojttbjtocdjeyyyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947552.7646863-1430-242422354023588/AnsiballZ_copy.py'
Feb 24 15:39:13 compute-0 sudo[215553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:14 compute-0 python3.9[215556]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947552.7646863-1430-242422354023588/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:14 compute-0 sudo[215553]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:14 compute-0 sudo[215706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ampyyrwlzthdomuulhdslhmsvvkiitap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947554.2059581-1445-195609097413825/AnsiballZ_file.py'
Feb 24 15:39:14 compute-0 sudo[215706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:14 compute-0 python3.9[215709]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:14 compute-0 sudo[215706]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:15 compute-0 podman[215793]: 2026-02-24 15:39:15.11293027 +0000 UTC m=+0.073357309 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:39:15 compute-0 sudo[215883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhpsmjqbfliormnytohtlkqopoirusn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947554.9283128-1453-126301593291157/AnsiballZ_command.py'
Feb 24 15:39:15 compute-0 sudo[215883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:15 compute-0 python3.9[215886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:15 compute-0 sudo[215883]: pam_unix(sudo:session): session closed for user root
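The check task above concatenates the five EDPM nftables fragments in their include order and pipes the result through `nft -c -f -`, which parses and validates the whole ruleset without loading it into the kernel. A minimal Python sketch of the same gate, assuming only the fragment paths visible in the log (the function name is illustrative):

#!/usr/bin/env python3
# Sketch: check-only validation of the EDPM ruleset, as in the logged task.
# Assumes the fragment paths from the log and the nft binary on PATH.
import subprocess
from pathlib import Path

FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def check_ruleset(paths=FRAGMENTS) -> bool:
    """Concatenate the fragments (like `cat`) and parse them with `nft -c -f -`."""
    ruleset = "".join(Path(p).read_text() for p in paths)
    proc = subprocess.run(["nft", "-c", "-f", "-"],
                          input=ruleset, text=True, capture_output=True)
    if proc.returncode != 0:
        print(proc.stderr)  # nft reports the offending line on stderr
    return proc.returncode == 0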
Feb 24 15:39:16 compute-0 sudo[216039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgskwsaehvytgjhmcwqyekbjxjspeet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947555.6713097-1461-168965731220380/AnsiballZ_blockinfile.py'
Feb 24 15:39:16 compute-0 sudo[216039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:16 compute-0 python3.9[216042]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:16 compute-0 sudo[216039]: pam_unix(sudo:session): session closed for user root
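blockinfile then maintains a marker-delimited block in /etc/sysconfig/nftables.conf, validating the candidate file with `nft -c -f %s` before swapping it in. Reconstructed from the parameters logged above (marker `# {mark} ANSIBLE MANAGED BLOCK` with BEGIN/END), the managed block should read roughly:

# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK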
Feb 24 15:39:16 compute-0 sudo[216192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzvbageahcjxrvoqiesaoauknboptuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947556.676134-1470-56440154015301/AnsiballZ_command.py'
Feb 24 15:39:16 compute-0 sudo[216192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:17 compute-0 python3.9[216195]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:17 compute-0 sudo[216192]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:17 compute-0 sudo[216346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgfxvgpkpdiyezpdhktavurnyhckuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947557.4081326-1478-272862724565445/AnsiballZ_stat.py'
Feb 24 15:39:17 compute-0 sudo[216346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:17 compute-0 python3.9[216349]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:39:17 compute-0 sudo[216346]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:18 compute-0 sudo[216501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpcosedusgslrwzmlkbzwaekpznknfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947558.1213658-1486-265450979875387/AnsiballZ_command.py'
Feb 24 15:39:18 compute-0 sudo[216501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:18 compute-0 python3.9[216504]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:18 compute-0 sudo[216501]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:19 compute-0 sudo[216657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnudebafuldoglhyljqardiknlkllds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947558.9776351-1494-185391951740078/AnsiballZ_file.py'
Feb 24 15:39:19 compute-0 sudo[216657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:19 compute-0 python3.9[216660]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:19 compute-0 sudo[216657]: pam_unix(sudo:session): session closed for user root
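Together with the earlier touch of /etc/nftables/edpm-rules.nft.changed and the stat above, this is a change-gated reload: the marker is created when the rules file is rewritten, the flush/rules/update-jumps fragments are fed to `nft -f -` only while the marker exists, and the marker is removed afterwards so an unchanged ruleset is not reapplied. The pattern, sketched in Python under those assumptions (names are illustrative):

import os
import subprocess

MARKER = "/etc/nftables/edpm-rules.nft.changed"
APPLY_ORDER = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

def apply_if_changed() -> bool:
    """Reload the EDPM ruleset only when the writer left a change marker behind."""
    if not os.path.exists(MARKER):
        return False  # nothing changed since the last apply
    ruleset = "".join(open(p).read() for p in APPLY_ORDER)
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
    os.unlink(MARKER)  # consume the marker so the next run is a no-op
    return True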
Feb 24 15:39:19 compute-0 sshd-session[189029]: Connection closed by 192.168.122.30 port 56628
Feb 24 15:39:19 compute-0 sshd-session[189026]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:39:19 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Feb 24 15:39:19 compute-0 systemd[1]: session-25.scope: Consumed 1min 49.485s CPU time.
Feb 24 15:39:19 compute-0 systemd-logind[813]: Session 25 logged out. Waiting for processes to exit.
Feb 24 15:39:19 compute-0 systemd-logind[813]: Removed session 25.
Feb 24 15:39:25 compute-0 sshd-session[216688]: Accepted publickey for zuul from 192.168.122.30 port 57566 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:39:25 compute-0 systemd-logind[813]: New session 26 of user zuul.
Feb 24 15:39:25 compute-0 systemd[1]: Started Session 26 of User zuul.
Feb 24 15:39:25 compute-0 sshd-session[216688]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:39:26 compute-0 sudo[216841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywnyrascrbzfrtpnderqhqhiofkvulys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947566.1350465-19-145471151139238/AnsiballZ_systemd_service.py'
Feb 24 15:39:26 compute-0 sudo[216841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:27 compute-0 python3.9[216844]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:39:27 compute-0 systemd[1]: Reloading.
Feb 24 15:39:27 compute-0 systemd-sysv-generator[216870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:39:27 compute-0 systemd-rc-local-generator[216867]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:39:27 compute-0 sudo[216841]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:28 compute-0 podman[217010]: 2026-02-24 15:39:28.099990577 +0000 UTC m=+0.071552057 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:39:28 compute-0 python3.9[217047]: ansible-ansible.builtin.service_facts Invoked
Feb 24 15:39:28 compute-0 network[217077]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 24 15:39:28 compute-0 network[217078]: 'network-scripts' will be removed from the distribution in the near future.
Feb 24 15:39:28 compute-0 network[217079]: It is advised to switch to 'NetworkManager' for network management.
Feb 24 15:39:29 compute-0 podman[217117]: 2026-02-24 15:39:29.492202249 +0000 UTC m=+0.060910803 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 24 15:39:29 compute-0 podman[204685]: time="2026-02-24T15:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:39:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21987 "" "Go-http-client/1.1"
Feb 24 15:39:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
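These two requests are the podman_exporter container polling the Podman service socket over the libpod REST API: one call lists all containers, the other takes a single (stream=false) stats sample. Assuming the third-party requests-unixsocket package and the /run/podman/podman.sock path mounted into the exporter, an equivalent probe might look like:

# Assumes the requests-unixsocket package; the socket path is URL-encoded
# into the authority part of the http+unix URL.
import requests_unixsocket

SOCK = "http+unix://%2Frun%2Fpodman%2Fpodman.sock"

session = requests_unixsocket.Session()
containers = session.get(f"{SOCK}/v4.9.3/libpod/containers/json?all=true").json()
stats = session.get(f"{SOCK}/v4.9.3/libpod/containers/stats?stream=false").json()
print(f"{len(containers)} containers visible to the exporter")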
Feb 24 15:39:31 compute-0 openstack_network_exporter[207830]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:39:31 compute-0 openstack_network_exporter[207830]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:39:31 compute-0 sudo[217379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpyymlisrhhymuqrtdsuribqahbagevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947571.326016-42-164569039890161/AnsiballZ_systemd_service.py'
Feb 24 15:39:31 compute-0 sudo[217379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:31 compute-0 python3.9[217382]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:39:32 compute-0 sudo[217379]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:32 compute-0 sudo[217533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmehhmkkoelhaoggjghgwziwupntbmue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947572.4065397-52-245472880198735/AnsiballZ_file.py'
Feb 24 15:39:33 compute-0 sudo[217533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:33 compute-0 python3.9[217536]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:33 compute-0 sudo[217533]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:33 compute-0 sudo[217686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmsrvjnhllzclqvmaoorvfutahurjsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947573.4577157-60-276137456759965/AnsiballZ_file.py'
Feb 24 15:39:33 compute-0 sudo[217686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:33 compute-0 python3.9[217689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:33 compute-0 sudo[217686]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:34 compute-0 podman[217690]: 2026-02-24 15:39:34.108290427 +0000 UTC m=+0.070960662 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter)
Feb 24 15:39:34 compute-0 sudo[217861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwfjavrvvsxmthacsvulquomzhtaudhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947574.2636063-69-120863145784517/AnsiballZ_command.py'
Feb 24 15:39:34 compute-0 sudo[217861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:34 compute-0 python3.9[217864]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:34 compute-0 sudo[217861]: pam_unix(sudo:session): session closed for user root
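The guarded shell above stops certmonger only when it is currently active, and masks it only when /etc/systemd/system holds no local unit file (the `test -f ... || systemctl mask` clause). A hedged Python rendering of the same logic, not the playbook's actual implementation:

import os
import subprocess

def quiesce_certmonger() -> None:
    # `systemctl is-active` exits 0 only while the unit is active.
    if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode != 0:
        return
    subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
    # Mask only when no local unit file exists, mirroring `test -f ... || mask`.
    if not os.path.isfile("/etc/systemd/system/certmonger.service"):
        subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)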
Feb 24 15:39:35 compute-0 python3.9[218016]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:39:36 compute-0 sudo[218166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrfsyejtkfqqhnrdgqxmhfeqylykwqtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947576.1411242-87-6361582047675/AnsiballZ_systemd_service.py'
Feb 24 15:39:36 compute-0 sudo[218166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:36 compute-0 python3.9[218169]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:39:36 compute-0 systemd[1]: Reloading.
Feb 24 15:39:36 compute-0 systemd-rc-local-generator[218187]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:39:36 compute-0 systemd-sysv-generator[218193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 24 15:39:37 compute-0 sudo[218166]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:37 compute-0 sudo[218361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmeorobrxbskhozrhfljmgxyzanbbud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947577.1714213-95-187039665205944/AnsiballZ_command.py'
Feb 24 15:39:37 compute-0 sudo[218361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:37 compute-0 python3.9[218364]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:39:37 compute-0 sudo[218361]: pam_unix(sudo:session): session closed for user root
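This reset-failed call closes out a conventional unit-retirement sequence spread over the preceding tasks: stop and disable tripleo_ceilometer_agent_ipmi.service, delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then clear any lingering failed state. As one imperative sketch (helper name illustrative):

import contextlib
import os
import subprocess

UNIT = "tripleo_ceilometer_agent_ipmi.service"

def retire_unit() -> None:
    subprocess.run(["systemctl", "disable", "--now", UNIT])  # stop + disable
    for path in (f"/usr/lib/systemd/system/{UNIT}",
                 f"/etc/systemd/system/{UNIT}"):
        with contextlib.suppress(FileNotFoundError):
            os.unlink(path)                                  # state=absent
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "reset-failed", UNIT])      # drop stale failed state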
Feb 24 15:39:38 compute-0 sudo[218515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwcvilempkodcnynyodmuyhnmxsclgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947577.988284-104-187855673072269/AnsiballZ_file.py'
Feb 24 15:39:38 compute-0 sudo[218515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:38 compute-0 python3.9[218518]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:38 compute-0 sudo[218515]: pam_unix(sudo:session): session closed for user root
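The file task creates the telemetry power-monitoring state directory with mode 0750, zuul ownership, and SELinux type container_file_t so podman-managed containers can mount it. Roughly equivalent imperative steps, with chcon standing in for Ansible's setype handling (a sketch, not what the module executes):

import shutil
import subprocess
from pathlib import Path

PATH = "/var/lib/openstack/telemetry-power-monitoring"

Path(PATH).mkdir(mode=0o750, parents=True, exist_ok=True)  # mode subject to umask
shutil.chown(PATH, user="zuul", group="zuul")
# Recursive label change; Ansible applies setype=container_file_t itself.
subprocess.run(["chcon", "-R", "-t", "container_file_t", PATH], check=True)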
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.964 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.964 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:38 compute-0 nova_compute[188703]: 2026-02-24 15:39:38.965 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:39 compute-0 podman[218642]: 2026-02-24 15:39:39.287709808 +0000 UTC m=+0.113590945 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 15:39:39 compute-0 python3.9[218678]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.825 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.828 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.834 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.835 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fd8f1f10>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:39:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.974 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:39:39 compute-0 nova_compute[188703]: 2026-02-24 15:39:39.977 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:39:40 compute-0 podman[218823]: 2026-02-24 15:39:40.008351609 +0000 UTC m=+0.118514811 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 24 15:39:40 compute-0 python3.9[218862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.148 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.150 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5841MB free_disk=72.29689407348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.150 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.150 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.225 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.225 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.248 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.266 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.268 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:39:40 compute-0 nova_compute[188703]: 2026-02-24 15:39:40.268 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:39:40 compute-0 python3.9[218990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947579.6247957-120-205470852007516/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:41 compute-0 nova_compute[188703]: 2026-02-24 15:39:41.265 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:41 compute-0 python3.9[219140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:41 compute-0 nova_compute[188703]: 2026-02-24 15:39:41.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:41 compute-0 nova_compute[188703]: 2026-02-24 15:39:41.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:39:41 compute-0 nova_compute[188703]: 2026-02-24 15:39:41.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:39:42 compute-0 python3.9[219261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947581.0899322-135-220919098185062/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:42 compute-0 sudo[219411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orskxhmrgzfpxnvkydggujlvsusbkjxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947582.4946783-153-126500258778094/AnsiballZ_getent.py'
Feb 24 15:39:42 compute-0 sudo[219411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:43 compute-0 python3.9[219414]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 24 15:39:43 compute-0 sudo[219411]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:44 compute-0 python3.9[219565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:45 compute-0 python3.9[219686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947584.0650878-181-108612825406490/.source.conf _original_basename=ceilometer.conf follow=False checksum=06bb8599d9c8a601385c703338dd9ca518a4891f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:45 compute-0 podman[219810]: 2026-02-24 15:39:45.491784354 +0000 UTC m=+0.050743631 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:39:45 compute-0 python3.9[219848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:46 compute-0 python3.9[219980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947585.1954288-181-246019446300074/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:46 compute-0 python3.9[220130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:47 compute-0 python3.9[220251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947586.3722768-181-931331040278/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:48 compute-0 python3.9[220401]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:39:48 compute-0 python3.9[220553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:39:49 compute-0 python3.9[220705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:49 compute-0 python3.9[220826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947588.9396582-240-92476838366090/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:50 compute-0 sudo[220976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsgzhmjcilowfgyqjttsmieqzpsolkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947590.2138567-255-224392507897382/AnsiballZ_file.py'
Feb 24 15:39:50 compute-0 sudo[220976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:50 compute-0 python3.9[220979]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:50 compute-0 sudo[220976]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:51 compute-0 sudo[221129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acplmfovdwhhpgdgyrgjuhvaovnhljdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947590.9027808-263-155885303361042/AnsiballZ_file.py'
Feb 24 15:39:51 compute-0 sudo[221129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:51 compute-0 python3.9[221132]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:51 compute-0 sudo[221129]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:51 compute-0 sudo[221282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhmhtaxgukksnfzgyidtqlwuxexhvwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947591.5951424-271-176225843813943/AnsiballZ_file.py'
Feb 24 15:39:51 compute-0 sudo[221282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:52 compute-0 python3.9[221285]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:52 compute-0 sudo[221282]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:52 compute-0 sudo[221435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwxwlhxneluftpqkeveipuerlsvfkdid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/AnsiballZ_stat.py'
Feb 24 15:39:52 compute-0 sudo[221435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:52 compute-0 python3.9[221438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:52 compute-0 sudo[221435]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:53 compute-0 sudo[221559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvegmwkapcxdkvibllkhjjyrjdxqupll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/AnsiballZ_copy.py'
Feb 24 15:39:53 compute-0 sudo[221559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:53 compute-0 python3.9[221562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:53 compute-0 sudo[221559]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:53 compute-0 sudo[221636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufruikfjsxllklqvwvahnrgffafhqxzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/AnsiballZ_stat.py'
Feb 24 15:39:53 compute-0 sudo[221636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:53 compute-0 python3.9[221639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:53 compute-0 sudo[221636]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:54 compute-0 sudo[221760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhpyzbpygilvqmcozhxebkdprjdbdsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/AnsiballZ_copy.py'
Feb 24 15:39:54 compute-0 sudo[221760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:54 compute-0 python3.9[221763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947592.3162935-279-160004081635627/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:54 compute-0 sudo[221760]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:55 compute-0 sudo[221913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alhfnfbfzxwvatmzyhuhpcpeqajgxdpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947594.7362483-279-255854022568723/AnsiballZ_stat.py'
Feb 24 15:39:55 compute-0 sudo[221913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:55 compute-0 python3.9[221916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:55 compute-0 sudo[221913]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:39:55.693 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:39:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:39:55.693 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:39:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:39:55.694 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:39:55 compute-0 sudo[222037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acszymtvghkupapcsvnxldftckyomikv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947594.7362483-279-255854022568723/AnsiballZ_copy.py'
Feb 24 15:39:55 compute-0 sudo[222037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:56 compute-0 python3.9[222040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771947594.7362483-279-255854022568723/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:56 compute-0 sudo[222037]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:56 compute-0 sudo[222190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwcebosqwusvlftfpdsuycvzveitusx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947596.514233-321-207929822971815/AnsiballZ_file.py'
Feb 24 15:39:56 compute-0 sudo[222190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:56 compute-0 python3.9[222193]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:57 compute-0 sudo[222190]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:58 compute-0 sudo[222356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfyotrmxhgxzyxxyroyolhnymqbffslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947598.162835-329-218350950407969/AnsiballZ_file.py'
Feb 24 15:39:58 compute-0 sudo[222356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:58 compute-0 podman[222317]: 2026-02-24 15:39:58.462869271 +0000 UTC m=+0.064123587 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:39:58 compute-0 python3.9[222370]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:39:58 compute-0 sudo[222356]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:59 compute-0 sudo[222520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxthkzidzazktmgpiawjbgnmpwwnzvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947598.8849452-337-34163648092934/AnsiballZ_stat.py'
Feb 24 15:39:59 compute-0 sudo[222520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:59 compute-0 python3.9[222523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:39:59 compute-0 sudo[222520]: pam_unix(sudo:session): session closed for user root
Feb 24 15:39:59 compute-0 podman[204685]: time="2026-02-24T15:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:39:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21987 "" "Go-http-client/1.1"
Feb 24 15:39:59 compute-0 sudo[222655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvywfktxcpvvcacnpeixfmalqwkfxdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947598.8849452-337-34163648092934/AnsiballZ_copy.py'
Feb 24 15:39:59 compute-0 sudo[222655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:39:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Feb 24 15:39:59 compute-0 podman[222618]: 2026-02-24 15:39:59.783137381 +0000 UTC m=+0.071625047 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:39:59 compute-0 python3.9[222660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947598.8849452-337-34163648092934/.source.json _original_basename=.qqrmbdnn follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:39:59 compute-0 sudo[222655]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:00 compute-0 python3.9[222814]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:01 compute-0 openstack_network_exporter[207830]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:40:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 15:40:01 compute-0 openstack_network_exporter[207830]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:40:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 15:40:02 compute-0 sudo[223235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkvcsrljuyazbvrsgezkqyevldotnegt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947602.2794688-377-40510429042620/AnsiballZ_container_config_data.py'
Feb 24 15:40:02 compute-0 sudo[223235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:03 compute-0 python3.9[223238]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Feb 24 15:40:03 compute-0 sudo[223235]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:03 compute-0 sudo[223388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmastioctmrykpkxluphejettqgcnhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947603.4128675-388-123963097858897/AnsiballZ_container_config_hash.py'
Feb 24 15:40:03 compute-0 sudo[223388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:04 compute-0 python3.9[223391]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:40:04 compute-0 sudo[223388]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:04 compute-0 podman[223392]: 2026-02-24 15:40:04.223294125 +0000 UTC m=+0.063894041 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git)
Feb 24 15:40:04 compute-0 sudo[223559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uooephclnprpnbidkscnyouuiwahhiqt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947604.471683-398-228841232088703/AnsiballZ_edpm_container_manage.py'
Feb 24 15:40:04 compute-0 sudo[223559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:05 compute-0 python3[223562]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
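ansible-edpm_container_manage reads every file matching config_patterns=*.json under config_dir and creates one container per definition. A minimal sketch of listing what it matched here; the directory comes straight from the log line above, while any particular file name inside it is an assumption based on the containers list:

    import glob
    import json

    CONFIG_DIR = ("/var/lib/edpm-config/container-startup-config/"
                  "ceilometer_agent_ipmi")

    for path in sorted(glob.glob(f"{CONFIG_DIR}/*.json")):
        with open(path) as f:
            cfg = json.load(f)
        # Print the keys that map onto the podman create flags logged below.
        print(path, "->", sorted(cfg))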
Feb 24 15:40:05 compute-0 podman[223599]: 2026-02-24 15:40:05.471479184 +0000 UTC m=+0.060792133 container create e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 24 15:40:05 compute-0 podman[223599]: 2026-02-24 15:40:05.441397732 +0000 UTC m=+0.030710711 image pull 20914a1cbbac726a2580da2b97a9d453e7e0538b5e06ae0c9613bcea0e3e5de9 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 24 15:40:05 compute-0 python3[223562]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49 --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
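The PODMAN-CONTAINER-DEBUG line above is the literal podman create invocation the module built, with the whole desired config serialized into the config_data label ('recreate': True means the container is rebuilt whenever that stored config drifts from the desired one). A rough sketch of such a comparison; note the label is printed as a Python literal (single quotes, True), so ast.literal_eval is used rather than json.loads:

    import ast
    import json
    import subprocess

    def stored_config(name: str) -> dict:
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        labels = json.loads(out)[0]["Config"]["Labels"]
        # config_data is a Python-literal string, not JSON.
        return ast.literal_eval(labels["config_data"])

    current = stored_config("ceilometer_agent_ipmi")
    desired_image = ("quay.io/podified-antelope-centos9/"
                     "openstack-ceilometer-ipmi:current-podified")
    print("image up to date:", current.get("image") == desired_image)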
Feb 24 15:40:06 compute-0 sudo[223559]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:07 compute-0 sudo[223788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkbyjzmmrpwjgrmkipvzdcgcjgufgsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947606.8487964-406-95711241891756/AnsiballZ_stat.py'
Feb 24 15:40:07 compute-0 sudo[223788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:07 compute-0 python3.9[223791]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:40:07 compute-0 sudo[223788]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:08 compute-0 sudo[223943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnouqsmhwyxxmbjxadwuajmkvdbprfqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947607.721901-415-61587887826439/AnsiballZ_file.py'
Feb 24 15:40:08 compute-0 sudo[223943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:08 compute-0 python3.9[223946]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:08 compute-0 sudo[223943]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:08 compute-0 sudo[224020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtxozlophgjhikjqvzygztutlznyjun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947607.721901-415-61587887826439/AnsiballZ_stat.py'
Feb 24 15:40:08 compute-0 sudo[224020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:08 compute-0 python3.9[224023]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:40:08 compute-0 sudo[224020]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:09 compute-0 sudo[224172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrtsxkjpfqacmxjsjevojmouczykihf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947608.8137999-415-1626284812046/AnsiballZ_copy.py'
Feb 24 15:40:09 compute-0 sudo[224172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:09 compute-0 python3.9[224175]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947608.8137999-415-1626284812046/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:09 compute-0 sudo[224172]: pam_unix(sudo:session): session closed for user root
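The ansible-copy task above installs /etc/systemd/system/edpm_ceilometer_agent_ipmi.service; the daemon-reload and restart that make it effective follow a few lines below. A small sketch for verifying the result once that reload has run, using plain systemctl calls:

    import subprocess

    UNIT = "edpm_ceilometer_agent_ipmi.service"

    # Show the unit file systemd actually loaded, then its enablement state.
    print(subprocess.run(["systemctl", "cat", UNIT],
                         capture_output=True, text=True).stdout)
    state = subprocess.run(["systemctl", "is-enabled", UNIT],
                           capture_output=True, text=True).stdout.strip()
    print(UNIT, "is", state or "unknown")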
Feb 24 15:40:09 compute-0 podman[224176]: 2026-02-24 15:40:09.668406547 +0000 UTC m=+0.123169601 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 24 15:40:10 compute-0 sudo[224286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxrotpkyabydjjrhshtrsuafpvtgdde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947608.8137999-415-1626284812046/AnsiballZ_systemd.py'
Feb 24 15:40:10 compute-0 sudo[224286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:10 compute-0 podman[224249]: 2026-02-24 15:40:10.171439746 +0000 UTC m=+0.080678580 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:40:10 compute-0 python3.9[224297]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:40:10 compute-0 systemd[1]: Reloading.
Feb 24 15:40:10 compute-0 systemd-rc-local-generator[224321]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:40:10 compute-0 systemd-sysv-generator[224324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:40:10 compute-0 sudo[224286]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:11 compute-0 sudo[224414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsukczflthtslfvusxorsfcwppwzniwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947608.8137999-415-1626284812046/AnsiballZ_systemd.py'
Feb 24 15:40:11 compute-0 sudo[224414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:11 compute-0 python3.9[224417]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:40:11 compute-0 systemd[1]: Reloading.
Feb 24 15:40:11 compute-0 systemd-rc-local-generator[224440]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:40:11 compute-0 systemd-sysv-generator[224443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:40:11 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 24 15:40:11 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
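The four kernel lines above are informational: the bind-mounted files live on an XFS filesystem created without the bigtime feature, so its inode timestamps saturate in 2038. A sketch for checking the flag, assuming xfsprogs is installed and reports bigtime in xfs_info output (older xfsprogs releases omit the field):

    import subprocess

    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True).stdout
    if "bigtime=1" in info:
        print("bigtime enabled: timestamps extend past 2038")
    else:
        print("bigtime absent or disabled: timestamps cap at 2038")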
Feb 24 15:40:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.
Feb 24 15:40:12 compute-0 podman[224465]: 2026-02-24 15:40:12.003230783 +0000 UTC m=+0.128564102 container init e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + sudo -E kolla_set_configs
Feb 24 15:40:12 compute-0 sudo[224486]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 24 15:40:12 compute-0 podman[224465]: 2026-02-24 15:40:12.034110488 +0000 UTC m=+0.159443757 container start e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:40:12 compute-0 sudo[224486]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:12 compute-0 sudo[224486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:12 compute-0 podman[224465]: ceilometer_agent_ipmi
Feb 24 15:40:12 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 24 15:40:12 compute-0 sudo[224414]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Validating config file
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Copying service configuration files
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: INFO:__main__:Writing out command to execute
Feb 24 15:40:12 compute-0 podman[224487]: 2026-02-24 15:40:12.111248769 +0000 UTC m=+0.061645398 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:40:12 compute-0 sudo[224486]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: ++ cat /run_command
Feb 24 15:40:12 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-43b8b6a3c2cda5df.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:40:12 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-43b8b6a3c2cda5df.service: Failed with result 'exit-code'.
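The failed transient unit above is the first healthcheck run for the freshly started ceilometer_agent_ipmi container; it exits 1 because ceilometer-polling is still initializing, matching the health_status=starting, health_failing_streak=1 event a few lines earlier. The same check can be re-run by hand:

    import subprocess

    # podman healthcheck run executes the container's configured test
    # ('/openstack/healthcheck ipmi' per the labels above) and exits 0/1.
    res = subprocess.run(["podman", "healthcheck", "run",
                          "ceilometer_agent_ipmi"])
    print("healthy" if res.returncode == 0
          else f"unhealthy (rc={res.returncode})")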
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + ARGS=
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + sudo kolla_copy_cacerts
Feb 24 15:40:12 compute-0 sudo[224511]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 24 15:40:12 compute-0 sudo[224511]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:12 compute-0 sudo[224511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:12 compute-0 sudo[224511]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + [[ ! -n '' ]]
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + . kolla_extend_start
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + umask 0022
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
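The trace above is the standard kolla_start sequence: kolla_set_configs copies everything listed in /var/lib/kolla/config_files/config.json into place (the Copying/Setting permission lines), writes the service command to /run_command, and finally execs it. A simplified sketch of the copy step; the real tool also applies the owner/perm fields from each entry and honors COPY_ALWAYS vs COPY_ONCE:

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # Each entry carries source/dest (plus owner/perm, ignored here).
    for entry in cfg.get("config_files", []):
        shutil.copy(entry["source"], entry["dest"])
        print("copied", entry["source"], "->", entry["dest"])

    print("command to exec:", cfg["command"])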
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.857 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.858 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.859 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.860 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.861 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.862 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.863 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.864 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.865 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.866 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.867 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.868 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.869 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.870 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.871 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.872 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.873 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.874 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.875 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.876 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.894 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.895 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.896 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 24 15:40:12 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:12.992 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1jg1q91a/privsep.sock']
Feb 24 15:40:13 compute-0 sudo[224666]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp1jg1q91a/privsep.sock
Feb 24 15:40:13 compute-0 sudo[224666]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:13 compute-0 sudo[224666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:13 compute-0 python3.9[224667]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:40:13 compute-0 sudo[224666]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.593 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.594 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1jg1q91a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.488 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.493 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.497 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.497 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.712 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.712 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.712 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.714 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.718 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.719 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.720 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.721 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.721 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.721 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.722 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.723 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.724 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.725 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.726 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.726 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.726 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.726 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.726 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.727 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.728 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.729 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.730 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.730 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.730 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.730 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.730 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.731 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.732 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.732 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.732 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.732 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.732 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.733 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.734 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.735 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.735 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.735 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.735 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.735 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.736 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.737 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.737 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.737 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.737 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.737 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.738 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.738 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.738 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.738 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.738 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.739 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.740 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.740 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.740 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.740 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.740 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.741 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.742 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.742 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.742 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.742 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.742 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.743 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.744 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.745 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.746 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.747 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.748 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.749 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.750 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.750 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.750 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.750 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.750 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.754 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.754 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 24 15:40:13 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:13.759 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
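[annotation] The option dump above ends with ceilometer.agent reporting the polling definition it parsed: one 'pollsters' source collecting every hardware.* meter on a 120-second interval. A minimal sketch, re-rendering exactly that structure in the YAML form a polling.yaml file normally takes — the dict values are copied verbatim from the "Config file:" line; the PyYAML dependency and the file-name convention are assumptions, not shown in this log:

# Sketch: render the polling definition logged by ceilometer.agent back to YAML.
# Values are verbatim from the log line above; requires PyYAML.
import yaml

polling = {"sources": [{"name": "pollsters", "interval": 120, "meters": ["hardware.*"]}]}
print(yaml.safe_dump(polling, sort_keys=False), end="")
# Output:
# sources:
# - name: pollsters
#   interval: 120
#   meters:
#   - hardware.*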
Feb 24 15:40:13 compute-0 sudo[224825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpohtfatrwypzrseqmeemihtswhlved ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947613.679971-460-974974119834/AnsiballZ_stat.py'
Feb 24 15:40:13 compute-0 sudo[224825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:14 compute-0 python3.9[224828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:40:14 compute-0 sudo[224825]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:14 compute-0 sudo[224951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkijwwouqntahtqjdoikmitqxdwgfbun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947613.679971-460-974974119834/AnsiballZ_copy.py'
Feb 24 15:40:14 compute-0 sudo[224951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:14 compute-0 python3.9[224954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947613.679971-460-974974119834/.source.yaml _original_basename=.4r7l8gqi follow=False checksum=b0179b99e27dfa296d8da84bf797349ab6a047b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:14 compute-0 sudo[224951]: pam_unix(sudo:session): session closed for user root
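[annotation] Every Ansible task in this log leaves the same three-line footprint: sudo records the BECOME-SUCCESS wrapper command, pam_unix opens/closes a root session, and the module process logs "ansible-<module> Invoked with <params>". A hedged helper for pulling the module name and raw parameter string out of such journal lines — the regex is mine, fitted to the lines above, not any Ansible API:

# Sketch: extract module name and parameters from journald lines like
#   "python3.9[224828]: ansible-ansible.legacy.stat Invoked with path=... follow=False"
# Regex is hand-written against this log, not part of Ansible.
import re

INVOKED = re.compile(r"ansible-(?P<module>\S+) Invoked with (?P<params>.*)$")

def parse_invocation(line: str):
    m = INVOKED.search(line)
    return (m.group("module"), m.group("params")) if m else None

sample = ("Feb 24 15:40:14 compute-0 python3.9[224828]: ansible-ansible.legacy.stat "
          "Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True")
print(parse_invocation(sample))
# ('ansible.legacy.stat', 'path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True')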
Feb 24 15:40:15 compute-0 sudo[225104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oideoibpqywgvzxxqdapeuosagmymhkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947615.1368492-477-247020001362734/AnsiballZ_file.py'
Feb 24 15:40:15 compute-0 sudo[225104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:15 compute-0 python3.9[225107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:15 compute-0 sudo[225104]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:15 compute-0 podman[225108]: 2026-02-24 15:40:15.712400413 +0000 UTC m=+0.066850903 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:40:16 compute-0 sudo[225280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdokyrhgnatzhhweoxxmwngejzhkvrli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947615.8454425-485-178105234368209/AnsiballZ_file.py'
Feb 24 15:40:16 compute-0 sudo[225280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:16 compute-0 python3.9[225283]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 24 15:40:16 compute-0 sudo[225280]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:17 compute-0 python3.9[225433]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:19 compute-0 sudo[225854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvkadqfnzvqayqbkmumlmvqtfaxgncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947618.706958-519-171891348146611/AnsiballZ_container_config_data.py'
Feb 24 15:40:19 compute-0 sudo[225854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:19 compute-0 python3.9[225857]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Feb 24 15:40:19 compute-0 sudo[225854]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:19 compute-0 sudo[226007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bonkekzetzqmvismdbqgqacbfssmhzzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947619.726838-530-174611120863164/AnsiballZ_container_config_hash.py'
Feb 24 15:40:19 compute-0 sudo[226007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:20 compute-0 python3.9[226010]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 24 15:40:20 compute-0 sudo[226007]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:20 compute-0 sudo[226160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcvswjkfspvstzczryvvwcmmdsnzmsdo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947620.5204358-540-102250266090251/AnsiballZ_edpm_container_manage.py'
Feb 24 15:40:20 compute-0 sudo[226160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:21 compute-0 python3[226163]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Feb 24 15:40:21 compute-0 podman[226200]: 2026-02-24 15:40:21.280443518 +0000 UTC m=+0.055855335 container create 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30)
Feb 24 15:40:21 compute-0 podman[226200]: 2026-02-24 15:40:21.251694013 +0000 UTC m=+0.027105810 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 24 15:40:21 compute-0 python3[226163]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Feb 24 15:40:21 compute-0 sudo[226160]: pam_unix(sudo:session): session closed for user root
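[annotation] The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage translates the kepler config_data dict into a podman create invocation: 'environment' becomes --env, 'net' --network, 'ports' --publish, 'volumes' --volume, 'privileged' --privileged=True, the healthcheck 'test' --healthcheck-command, plus config_id/container_name/managed_by labels and the journald log driver. A simplified re-implementation of that mapping, written from the logged command rather than the module's source; it reproduces only the flags visible above (the real module also attaches the full config_data as a label and handles many more keys):

# Sketch: rebuild the podman create argv logged above from a config_data dict.
# Hand-written against this log; not the edpm_container_manage implementation.
def podman_create_argv(name: str, cfg: dict) -> list[str]:
    argv = ["podman", "create", "--name", name,
            "--conmon-pidfile", f"/run/{name}.pid"]
    for k, v in cfg.get("environment", {}).items():
        argv += ["--env", f"{k}={v}"]
    if "healthcheck" in cfg:
        argv += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    for label in (f"config_id={name}", f"container_name={name}",
                  "managed_by=edpm_ansible"):
        argv += ["--label", label]
    argv += ["--log-driver", "journald", "--log-level", "info"]
    if cfg.get("net"):
        argv += ["--network", cfg["net"]]
    if cfg.get("privileged"):
        argv += [f"--privileged={cfg['privileged']}"]
    for port in cfg.get("ports", []):
        argv += ["--publish", port]
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    argv.append(cfg["image"])
    command = cfg.get("command")
    if command:
        argv += command if isinstance(command, list) else [command]
    return argv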
Feb 24 15:40:21 compute-0 sudo[226388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oulkdmtyxxfpazykrtsvrbyymhnfcrno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947621.631434-548-252815991315192/AnsiballZ_stat.py'
Feb 24 15:40:21 compute-0 sudo[226388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:22 compute-0 python3.9[226391]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:40:22 compute-0 sudo[226388]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:22 compute-0 sudo[226543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daapokynpavlngqttekuiejoahnwddwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947622.4259617-557-121582463631593/AnsiballZ_file.py'
Feb 24 15:40:22 compute-0 sudo[226543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:22 compute-0 python3.9[226546]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:22 compute-0 sudo[226543]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:23 compute-0 sudo[226620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmntvnewoddnfxaaqeiakypffztmqovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947622.4259617-557-121582463631593/AnsiballZ_stat.py'
Feb 24 15:40:23 compute-0 sudo[226620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:23 compute-0 python3.9[226623]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:40:23 compute-0 sudo[226620]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:23 compute-0 sudo[226772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liytrvpijwximcyascnhxldtdutbgcqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947623.4989247-557-40783452114630/AnsiballZ_copy.py'
Feb 24 15:40:23 compute-0 sudo[226772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:24 compute-0 python3.9[226775]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771947623.4989247-557-40783452114630/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:24 compute-0 sudo[226772]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:24 compute-0 sudo[226849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobkqqwnowloqhqriyoutpgkvuqtyois ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947623.4989247-557-40783452114630/AnsiballZ_systemd.py'
Feb 24 15:40:24 compute-0 sudo[226849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:24 compute-0 python3.9[226852]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 24 15:40:24 compute-0 systemd[1]: Reloading.
Feb 24 15:40:24 compute-0 systemd-rc-local-generator[226879]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:40:24 compute-0 systemd-sysv-generator[226882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:40:25 compute-0 sudo[226849]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:25 compute-0 sudo[226968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezvlnpkujkvzavkyltllhuwxkznvsgxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947623.4989247-557-40783452114630/AnsiballZ_systemd.py'
Feb 24 15:40:25 compute-0 sudo[226968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:25 compute-0 python3.9[226971]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 24 15:40:25 compute-0 systemd[1]: Reloading.
Feb 24 15:40:25 compute-0 systemd-rc-local-generator[226996]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 24 15:40:25 compute-0 systemd-sysv-generator[226999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 24 15:40:26 compute-0 systemd[1]: Starting kepler container...
Feb 24 15:40:26 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:40:26 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.
Feb 24 15:40:26 compute-0 podman[227018]: 2026-02-24 15:40:26.238721055 +0000 UTC m=+0.135940239 container init 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 15:40:26 compute-0 podman[227018]: 2026-02-24 15:40:26.270432413 +0000 UTC m=+0.167651617 container start 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler, maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Feb 24 15:40:26 compute-0 podman[227018]: kepler
Feb 24 15:40:26 compute-0 systemd[1]: Started kepler container.
Feb 24 15:40:26 compute-0 kepler[227033]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.297269       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.297487       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.297511       1 config.go:295] kernel version: 5.14
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.298377       1 power.go:78] Unable to obtain power, use estimate method
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.298422       1 redfish.go:169] failed to get redfish credential file path
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.299035       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.299049       1 power.go:79] using none to obtain power
Feb 24 15:40:26 compute-0 kepler[227033]: E0224 15:40:26.299112       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 24 15:40:26 compute-0 kepler[227033]: E0224 15:40:26.299153       1 exporter.go:154] failed to init GPU accelerators: no devices found
Feb 24 15:40:26 compute-0 kepler[227033]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.301642       1 exporter.go:84] Number of CPUs: 8
Feb 24 15:40:26 compute-0 sudo[226968]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:26 compute-0 podman[227038]: 2026-02-24 15:40:26.355762143 +0000 UTC m=+0.074157369 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, com.redhat.component=ubi9-container, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64)
Feb 24 15:40:26 compute-0 systemd[1]: 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-47d68638a8647a32.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:40:26 compute-0 systemd[1]: 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-47d68638a8647a32.service: Failed with result 'exit-code'.
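[annotation] The failing transient unit here (container ID plus a random suffix) is the one-shot "podman healthcheck run" started a few lines up; its exit status 1 matches the health_status=starting / health_failing_streak=1 event just above — the first probe fired before Kepler's listener was up. The current verdict can be read back with podman's native inspect template; the field path below is podman's own (some versions also expose a Docker-compatible .State.Health), which is worth verifying against your podman release:

# Sketch: read the container health state recorded by `podman healthcheck run`.
# Field path {{.State.Healthcheck.Status}} per podman's native inspect output;
# verify on your podman version (newer releases also accept .State.Health).
import subprocess

out = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Healthcheck.Status}}", "kepler"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # "starting" here, "healthy" once a probe passes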
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.710980       1 watcher.go:83] Using in cluster k8s config
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.711025       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 24 15:40:26 compute-0 kepler[227033]: E0224 15:40:26.711133       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.714680       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.714724       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.719236       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.719280       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.730404       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.730449       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.730475       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738776       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738827       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738835       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738843       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738853       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738871       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.738971       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.739035       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.739062       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.739143       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.739365       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 24 15:40:26 compute-0 kepler[227033]: I0224 15:40:26.740186       1 exporter.go:208] Started Kepler in 443.242595ms
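[annotation] Kepler is now up and serving Prometheus metrics on 0.0.0.0:8888, per the "starting to listen" line above. A minimal scrape of that endpoint; the /metrics path is the Prometheus exposition convention and an assumption here, since the log only shows the bind address:

# Sketch: fetch Kepler's exposition text and list the kepler_ metric families.
# Host/port from the log; the /metrics path is assumed (Prometheus convention).
from urllib.request import urlopen

body = urlopen("http://127.0.0.1:8888/metrics", timeout=5).read().decode()
families = sorted({line.split()[2] for line in body.splitlines()
                   if line.startswith("# TYPE kepler_")})
print("\n".join(families))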
Feb 24 15:40:27 compute-0 python3.9[227225]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 24 15:40:28 compute-0 sudo[227375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htzgmthiyroqciwyckpbwqxxbyfhvpyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947627.8754685-602-101021779205154/AnsiballZ_stat.py'
Feb 24 15:40:28 compute-0 sudo[227375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:28 compute-0 python3.9[227378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:40:28 compute-0 sudo[227375]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:28 compute-0 podman[227379]: 2026-02-24 15:40:28.607364577 +0000 UTC m=+0.087208803 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:40:29 compute-0 sudo[227525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdkdckdibhqjnvhdhsoigulaazhpptt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947627.8754685-602-101021779205154/AnsiballZ_copy.py'
Feb 24 15:40:29 compute-0 sudo[227525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:29 compute-0 python3.9[227528]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947627.8754685-602-101021779205154/.source.yaml _original_basename=.cba2ftb2 follow=False checksum=7e34c11a496e3e690bfcfec8f8efddcc6bea5609 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:29 compute-0 sudo[227525]: pam_unix(sudo:session): session closed for user root
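[annotation] deployed_services.yaml is rewritten here with a new SHA-1 (7e34c11a…, versus b0179b99… at 15:40:14), which is how the stat/copy pair decides the file content changed — the stat invocations above explicitly request checksum_algorithm=sha1. The same digest can be reproduced directly:

# Sketch: compute the SHA-1 the ansible stat/copy modules compare
# (checksum_algorithm=sha1 in the invocations above).
import hashlib

def ansible_style_sha1(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

print(ansible_style_sha1("/var/lib/edpm-config/deployed_services.yaml"))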
Feb 24 15:40:29 compute-0 podman[204685]: time="2026-02-24T15:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:40:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28010 "" "Go-http-client/1.1"
Feb 24 15:40:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3843 "" "Go-http-client/1.1"
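[annotation] These two GET lines are the podman system service answering podman_exporter over the unix socket — /run/podman/podman.sock, the CONTAINER_HOST visible in the exporter's config_data at 15:40:15. The same libpod REST endpoint can be queried directly; the URL path is copied from the access-log line above, and curl's --unix-socket flag is standard, though curl on the host is an assumption:

# Sketch: hit the libpod endpoint seen in the access-log lines above.
# Socket path and URL path are from this log; requires curl on the host.
import json, subprocess

raw = subprocess.run(
    ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
     "http://d/v4.9.3/libpod/containers/json?all=true"],
    capture_output=True, text=True, check=True,
).stdout
for c in json.loads(raw):
    print(c["Names"][0], c["State"])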
Feb 24 15:40:29 compute-0 sudo[227695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khxbeibywbojanahnlgivcnknyiwsobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947629.5542934-617-20436166867719/AnsiballZ_systemd.py'
Feb 24 15:40:29 compute-0 sudo[227695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:30 compute-0 podman[227652]: 2026-02-24 15:40:30.028790851 +0000 UTC m=+0.117037189 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:40:30 compute-0 python3.9[227700]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:40:30 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Feb 24 15:40:30 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:30.430 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Feb 24 15:40:30 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:30.534 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Feb 24 15:40:30 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:30.534 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Feb 24 15:40:30 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:30.535 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Feb 24 15:40:30 compute-0 ceilometer_agent_ipmi[224480]: 2026-02-24 15:40:30.543 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Feb 24 15:40:30 compute-0 systemd[1]: libpod-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope: Deactivated successfully.
Feb 24 15:40:30 compute-0 systemd[1]: libpod-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope: Consumed 1.995s CPU time.
Feb 24 15:40:30 compute-0 podman[227704]: 2026-02-24 15:40:30.702301265 +0000 UTC m=+0.338659796 container died e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Feb 24 15:40:30 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-43b8b6a3c2cda5df.timer: Deactivated successfully.
Feb 24 15:40:30 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.
Feb 24 15:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-userdata-shm.mount: Deactivated successfully.
Feb 24 15:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7-merged.mount: Deactivated successfully.
Feb 24 15:40:30 compute-0 podman[227704]: 2026-02-24 15:40:30.775999019 +0000 UTC m=+0.412357500 container cleanup e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:40:30 compute-0 podman[227704]: ceilometer_agent_ipmi
Feb 24 15:40:30 compute-0 podman[227731]: ceilometer_agent_ipmi
Feb 24 15:40:30 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Feb 24 15:40:30 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Feb 24 15:40:30 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 24 15:40:31 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 24 15:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd8e9c416cf02947de9c55f60f14300e82b4a6d8a32f5dd9442260536398f7/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
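[Annotation] The four xfs warnings above are informational: the bind-mounted files sit on an XFS filesystem whose inodes use 32-bit timestamps, so the kernel notes they are only representable until 2038 (0x7fffffff seconds after the Unix epoch); filesystems created with the XFS bigtime feature do not have this limit, which xfs_info on the mount point would show. A quick sanity check of the quoted constant (a minimal sketch; only the hex value comes from the log):

    # 0x7fffffff is 2**31 - 1 seconds since the Unix epoch; converting it
    # reproduces the Y2038 limit the kernel warning refers to.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00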
Feb 24 15:40:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.
Feb 24 15:40:31 compute-0 podman[227745]: 2026-02-24 15:40:31.099893122 +0000 UTC m=+0.183923674 container init e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + sudo -E kolla_set_configs
Feb 24 15:40:31 compute-0 sudo[227765]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 24 15:40:31 compute-0 podman[227745]: 2026-02-24 15:40:31.140797157 +0000 UTC m=+0.224827629 container start e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 15:40:31 compute-0 sudo[227765]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:31 compute-0 sudo[227765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:31 compute-0 podman[227745]: ceilometer_agent_ipmi
Feb 24 15:40:31 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Validating config file
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Copying service configuration files
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: INFO:__main__:Writing out command to execute
Feb 24 15:40:31 compute-0 sudo[227765]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: ++ cat /run_command
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + ARGS=
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + sudo kolla_copy_cacerts
Feb 24 15:40:31 compute-0 sudo[227695]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:31 compute-0 sudo[227780]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 24 15:40:31 compute-0 sudo[227780]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:31 compute-0 sudo[227780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:31 compute-0 sudo[227780]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + [[ ! -n '' ]]
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + . kolla_extend_start
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + umask 0022
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
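[Annotation] The trace above is the standard Kolla entrypoint sequence: kolla_set_configs copies each file listed in the mounted /var/lib/kolla/config_files/config.json into place, writes the service command to /run_command, and kolla_start finally execs it. Reconstructed from the copy operations and the command logged above, the config.json plausibly looks like the sketch below; the owner and perm values are assumptions, not taken from this log, and the two custom.conf entries follow the same pattern:

    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/ceilometer.conf",
                "dest": "/etc/ceilometer/ceilometer.conf",
                "owner": "ceilometer",
                "perm": "0600"
            },
            {
                "source": "/var/lib/kolla/config_files/src/polling.yaml",
                "dest": "/etc/ceilometer/polling.yaml",
                "owner": "ceilometer",
                "perm": "0600"
            }
        ]
    }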
Feb 24 15:40:31 compute-0 podman[227766]: 2026-02-24 15:40:31.26552377 +0000 UTC m=+0.109608910 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 15:40:31 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-51eab3f522c421a5.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:40:31 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-51eab3f522c421a5.service: Failed with result 'exit-code'.
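[Annotation] The failed transient unit here is the podman healthcheck, not the agent itself: systemd drives container healthchecks through generated <container-id>-<hash>.service/.timer pairs that invoke "podman healthcheck run <id>" (visible a few lines up). The status=1 exit matches the health_status=starting / health_failing_streak=1 event logged for the container at 15:40:31, which is consistent with the check firing before ceilometer-polling finished initializing rather than with a service failure.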
Feb 24 15:40:31 compute-0 openstack_network_exporter[207830]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:40:31 compute-0 openstack_network_exporter[207830]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:40:31 compute-0 sudo[227940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dszdwopynuxdxngffjjabmfdfnxcfduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947631.3888693-625-245256288500707/AnsiballZ_systemd.py'
Feb 24 15:40:31 compute-0 sudo[227940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
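[Annotation] These zuul sudo entries show an Ansible run (the AnsiballZ_systemd.py payload) driving the service restarts on this node; the explicit module call appears further down as "ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted". An equivalent task, as an illustrative sketch (the task name is hypothetical):

    # Illustrative Ansible task matching the ansible.builtin.systemd
    # invocation logged below for edpm_kepler.service.
    - name: Restart an EDPM-managed container service
      ansible.builtin.systemd:
        name: edpm_kepler.service
        state: restarted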
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.897 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.898 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.899 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.900 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.901 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.902 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.903 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.904 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.905 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.907 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.908 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.909 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.910 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.911 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
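[Annotation] The block of DEBUG lines ending here is cotyledon's oslo.config glue dumping every resolved option at service start, enabled because debug = True. Options declared secret, such as coordination.backend_url and publisher.telemetry_secret above, are printed as **** rather than their values. A minimal sketch of the same mechanism, assuming oslo.config is installed:

    # Sketch of oslo.config's startup dump: options registered with
    # secret=True are masked as '****' by log_opt_values().
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('batch_size', default=50),
        cfg.StrOpt('backend_url', secret=True),  # rendered as '****'
    ])
    CONF([])  # parse an empty argv

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)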
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.930 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.932 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.933 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 24 15:40:31 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:31.952 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpgnn_xewj/privsep.sock']
Feb 24 15:40:31 compute-0 sudo[227948]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpgnn_xewj/privsep.sock
Feb 24 15:40:31 compute-0 sudo[227948]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 24 15:40:31 compute-0 sudo[227948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 24 15:40:32 compute-0 python3.9[227943]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:40:32 compute-0 systemd[1]: Stopping kepler container...
Feb 24 15:40:32 compute-0 kepler[227033]: I0224 15:40:32.282963       1 exporter.go:218] Received shutdown signal
Feb 24 15:40:32 compute-0 kepler[227033]: I0224 15:40:32.283358       1 exporter.go:226] Exiting...
Feb 24 15:40:32 compute-0 systemd[1]: libpod-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.scope: Deactivated successfully.
Feb 24 15:40:32 compute-0 podman[227955]: 2026-02-24 15:40:32.473042831 +0000 UTC m=+0.263617874 container died 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_id=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543)
Feb 24 15:40:32 compute-0 systemd[1]: 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-47d68638a8647a32.timer: Deactivated successfully.
Feb 24 15:40:32 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.
Feb 24 15:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-userdata-shm.mount: Deactivated successfully.
Feb 24 15:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc4a1ebb993a5161409b4cb9ea041abd8c5184fd16cb2b0912351b97f9f4c13-merged.mount: Deactivated successfully.
Feb 24 15:40:32 compute-0 podman[227955]: 2026-02-24 15:40:32.514663477 +0000 UTC m=+0.305238520 container cleanup 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 15:40:32 compute-0 podman[227955]: kepler
Feb 24 15:40:32 compute-0 sudo[227948]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.579 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.580 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgnn_xewj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.472 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.475 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.476 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.476 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Feb 24 15:40:32 compute-0 podman[227982]: kepler
Feb 24 15:40:32 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Feb 24 15:40:32 compute-0 systemd[1]: Stopped kepler container.
Feb 24 15:40:32 compute-0 systemd[1]: Starting kepler container...
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.691 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.692 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.693 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.694 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.694 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.694 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.694 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.695 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.695 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.695 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.695 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.695 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.696 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.700 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.700 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.700 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.701 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.702 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.702 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.702 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.702 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.703 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.703 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.703 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.703 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.703 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.704 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.705 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.706 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.707 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.708 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.709 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.710 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.711 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.711 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.711 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.711 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.711 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.712 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.713 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.713 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.713 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.714 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.715 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.716 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.716 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.716 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.717 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.717 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.717 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.717 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.718 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.718 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.718 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.719 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.719 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.719 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.719 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.720 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.720 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.720 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.721 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.724 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.724 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.725 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.725 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.726 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.726 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.726 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.726 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.727 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.727 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.727 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.729 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.729 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.731 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.731 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.731 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.731 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.732 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.732 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.732 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.732 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.733 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.733 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.733 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.733 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.734 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.734 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.734 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.734 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.735 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.735 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.735 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.735 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.735 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.736 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.737 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.738 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.739 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.739 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.739 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.739 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.739 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.740 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.741 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.741 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.741 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.742 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.742 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.742 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.742 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.743 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.743 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.743 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.743 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.744 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.744 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.744 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 24 15:40:32 compute-0 ceilometer_agent_ipmi[227758]: 2026-02-24 15:40:32.747 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Feb 24 15:40:32 compute-0 podman[227998]: 2026-02-24 15:40:32.749805044 +0000 UTC m=+0.122510443 container init 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Feb 24 15:40:32 compute-0 kepler[228015]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 24 15:40:32 compute-0 podman[227998]: 2026-02-24 15:40:32.775107472 +0000 UTC m=+0.147812881 container start 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Feb 24 15:40:32 compute-0 podman[227998]: kepler
Feb 24 15:40:32 compute-0 systemd[1]: Started kepler container.
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.785961       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.786348       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.786383       1 config.go:295] kernel version: 5.14
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.787175       1 power.go:78] Unable to obtain power, use estimate method
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.787220       1 redfish.go:169] failed to get redfish credential file path
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.787767       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.787796       1 power.go:79] using none to obtain power
Feb 24 15:40:32 compute-0 kepler[228015]: E0224 15:40:32.787824       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 24 15:40:32 compute-0 kepler[228015]: E0224 15:40:32.787861       1 exporter.go:154] failed to init GPU accelerators: no devices found
Feb 24 15:40:32 compute-0 kepler[228015]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 24 15:40:32 compute-0 kepler[228015]: I0224 15:40:32.790766       1 exporter.go:84] Number of CPUs: 8
Feb 24 15:40:32 compute-0 sudo[227940]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:32 compute-0 podman[228026]: 2026-02-24 15:40:32.863488928 +0000 UTC m=+0.079182009 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=)
Feb 24 15:40:32 compute-0 systemd[1]: 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-4d360da259988ac5.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:40:32 compute-0 systemd[1]: 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561-4d360da259988ac5.service: Failed with result 'exit-code'.
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.222541       1 watcher.go:83] Using in cluster k8s config
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.222616       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 24 15:40:33 compute-0 kepler[228015]: E0224 15:40:33.222746       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.230033       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.230157       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.236713       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.236766       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.247704       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.247759       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.247778       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259785       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259842       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259850       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259858       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259866       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259881       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.259984       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.260021       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.260049       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.260148       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.260677       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 24 15:40:33 compute-0 kepler[228015]: I0224 15:40:33.261269       1 exporter.go:208] Started Kepler in 475.563629ms
Feb 24 15:40:33 compute-0 sudo[228209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-occopwsyktvpguwdcefqbnhkgbqdtazn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947633.050551-633-240872125045940/AnsiballZ_find.py'
Feb 24 15:40:33 compute-0 sudo[228209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:33 compute-0 python3.9[228212]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 24 15:40:33 compute-0 sudo[228209]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:34 compute-0 sudo[228377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txonfuxsldlawrrikhzczwbydabdaklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947634.1834428-643-39159588390015/AnsiballZ_podman_container_info.py'
Feb 24 15:40:34 compute-0 sudo[228377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:34 compute-0 podman[228336]: 2026-02-24 15:40:34.743933827 +0000 UTC m=+0.105449305 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Feb 24 15:40:34 compute-0 python3.9[228384]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 24 15:40:35 compute-0 sudo[228377]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:35 compute-0 sudo[228546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meedfksanpqpzvnskecpsfzvnnvnemth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947635.3657892-651-69282308770318/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:36 compute-0 sudo[228546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:36 compute-0 python3.9[228549]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:36 compute-0 systemd[1]: Started libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope.
Feb 24 15:40:36 compute-0 podman[228550]: 2026-02-24 15:40:36.360965428 +0000 UTC m=+0.140409553 container exec 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:40:36 compute-0 podman[228550]: 2026-02-24 15:40:36.396270937 +0000 UTC m=+0.175715012 container exec_died 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb 24 15:40:36 compute-0 sudo[228546]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:36 compute-0 systemd[1]: libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope: Deactivated successfully.
Feb 24 15:40:37 compute-0 sudo[228731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knminkzoaxfpjqagxbbvjizrqhubkqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947636.6785731-659-63619408366031/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:37 compute-0 sudo[228731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:37 compute-0 python3.9[228734]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:37 compute-0 systemd[1]: Started libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope.
Feb 24 15:40:37 compute-0 podman[228735]: 2026-02-24 15:40:37.421521304 +0000 UTC m=+0.124602571 container exec 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:40:37 compute-0 podman[228735]: 2026-02-24 15:40:37.454571399 +0000 UTC m=+0.157652646 container exec_died 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 15:40:37 compute-0 systemd[1]: libpod-conmon-6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52.scope: Deactivated successfully.
Feb 24 15:40:37 compute-0 sudo[228731]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:38 compute-0 sudo[228916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjeqkiwhegowzyfgkppvbqtzcnwkvmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947637.7993634-667-166116640398858/AnsiballZ_file.py'
Feb 24 15:40:38 compute-0 sudo[228916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:38 compute-0 python3.9[228919]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:38 compute-0 sudo[228916]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:38 compute-0 nova_compute[188703]: 2026-02-24 15:40:38.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.150 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.150 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.151 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.173 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.174 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:39 compute-0 nova_compute[188703]: 2026-02-24 15:40:39.175 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:39 compute-0 sudo[229072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyoqnuopeprdhjzwojtrfcepxtxoqmqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947638.7106726-676-197748205097111/AnsiballZ_podman_container_info.py'
Feb 24 15:40:39 compute-0 sudo[229072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:39 compute-0 python3.9[229075]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 24 15:40:39 compute-0 sudo[229072]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:40 compute-0 podman[229163]: 2026-02-24 15:40:40.217906848 +0000 UTC m=+0.164056986 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:40:40 compute-0 podman[229234]: 2026-02-24 15:40:40.361630713 +0000 UTC m=+0.108603033 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 24 15:40:40 compute-0 sudo[229278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxaaekmiaciueiqkkdalpzltbpnotfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947639.8843613-684-196651313340040/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:40 compute-0 sudo[229278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:40 compute-0 python3.9[229283]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:40 compute-0 systemd[1]: Started libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope.
Feb 24 15:40:40 compute-0 podman[229284]: 2026-02-24 15:40:40.752306105 +0000 UTC m=+0.131457962 container exec e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:40:40 compute-0 podman[229284]: 2026-02-24 15:40:40.786496453 +0000 UTC m=+0.165648290 container exec_died e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:40:40 compute-0 sudo[229278]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:40 compute-0 systemd[1]: libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope: Deactivated successfully.
Feb 24 15:40:40 compute-0 nova_compute[188703]: 2026-02-24 15:40:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:40 compute-0 nova_compute[188703]: 2026-02-24 15:40:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:40 compute-0 nova_compute[188703]: 2026-02-24 15:40:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.016 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.017 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.018 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.018 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.477 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.479 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5705MB free_disk=72.29655838012695GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.479 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.480 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:40:41 compute-0 sudo[229465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txcqxgocjcldsufwfutnmemtvpwctqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947641.1294458-692-87461652975320/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.558 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.559 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:40:41 compute-0 sudo[229465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.592 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.612 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.615 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:40:41 compute-0 nova_compute[188703]: 2026-02-24 15:40:41.615 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:40:41 compute-0 python3.9[229468]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:41 compute-0 systemd[1]: Started libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope.
Feb 24 15:40:41 compute-0 podman[229469]: 2026-02-24 15:40:41.911212955 +0000 UTC m=+0.120606049 container exec e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:40:41 compute-0 podman[229469]: 2026-02-24 15:40:41.943517 +0000 UTC m=+0.152910084 container exec_died e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 15:40:42 compute-0 systemd[1]: libpod-conmon-e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9.scope: Deactivated successfully.
Feb 24 15:40:42 compute-0 sudo[229465]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:42 compute-0 nova_compute[188703]: 2026-02-24 15:40:42.616 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:42 compute-0 nova_compute[188703]: 2026-02-24 15:40:42.617 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:42 compute-0 nova_compute[188703]: 2026-02-24 15:40:42.618 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:40:42 compute-0 sudo[229648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqoyftamzjqdvbhcnuypbnccphkuzppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947642.251727-700-200787987714325/AnsiballZ_file.py'
Feb 24 15:40:42 compute-0 sudo[229648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:42 compute-0 python3.9[229651]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:42 compute-0 sudo[229648]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:43 compute-0 sudo[229801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwtrazbibwrvbdsgaexxbutxkjhgoteq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947643.1647265-709-30128271146334/AnsiballZ_podman_container_info.py'
Feb 24 15:40:43 compute-0 sudo[229801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:43 compute-0 python3.9[229804]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 24 15:40:43 compute-0 sudo[229801]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:43 compute-0 nova_compute[188703]: 2026-02-24 15:40:43.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:40:44 compute-0 sudo[229967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbyrurikjxegwfjgnqcxmntoaqglqgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947644.144446-717-104615585543009/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:44 compute-0 sudo[229967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:44 compute-0 python3.9[229970]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:44 compute-0 systemd[1]: Started libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope.
Feb 24 15:40:44 compute-0 podman[229971]: 2026-02-24 15:40:44.900543583 +0000 UTC m=+0.132733679 container exec 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:40:44 compute-0 podman[229971]: 2026-02-24 15:40:44.937382815 +0000 UTC m=+0.169572901 container exec_died 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:40:44 compute-0 systemd[1]: libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope: Deactivated successfully.
Feb 24 15:40:45 compute-0 sudo[229967]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:45 compute-0 sudo[230151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbhkvvvhmlwsiafalraeqxeqxgexmud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947645.2759366-725-51039253677739/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:45 compute-0 sudo[230151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:45 compute-0 python3.9[230154]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:46 compute-0 systemd[1]: Started libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope.
Feb 24 15:40:46 compute-0 podman[230155]: 2026-02-24 15:40:46.044665119 +0000 UTC m=+0.131123193 container exec 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:40:46 compute-0 podman[230155]: 2026-02-24 15:40:46.077399506 +0000 UTC m=+0.163857580 container exec_died 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:40:46 compute-0 sudo[230151]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:46 compute-0 systemd[1]: libpod-conmon-2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda.scope: Deactivated successfully.
Feb 24 15:40:46 compute-0 podman[230171]: 2026-02-24 15:40:46.197932592 +0000 UTC m=+0.154288862 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:40:46 compute-0 sudo[230356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeglmakbfimsdpniucyfzydsezdrihxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947646.3654108-733-78587240594122/AnsiballZ_file.py'
Feb 24 15:40:46 compute-0 sudo[230356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:47 compute-0 python3.9[230359]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:47 compute-0 sudo[230356]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:47 compute-0 sudo[230509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snaworoupensrjnrqsmjakvusdabtvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947647.4278975-742-234421873559164/AnsiballZ_podman_container_info.py'
Feb 24 15:40:47 compute-0 sudo[230509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:48 compute-0 python3.9[230512]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 24 15:40:48 compute-0 sudo[230509]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:48 compute-0 sudo[230674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgahfeotdaxueaequzansgvbeltzkfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947648.565164-750-149530217547170/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:48 compute-0 sudo[230674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:49 compute-0 python3.9[230677]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:49 compute-0 systemd[1]: Started libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope.
Feb 24 15:40:49 compute-0 podman[230678]: 2026-02-24 15:40:49.308602328 +0000 UTC m=+0.115295740 container exec 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:40:49 compute-0 podman[230678]: 2026-02-24 15:40:49.343353232 +0000 UTC m=+0.150046654 container exec_died 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:40:49 compute-0 systemd[1]: libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope: Deactivated successfully.
Feb 24 15:40:49 compute-0 sudo[230674]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:50 compute-0 sudo[230860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elczvnkrlympeaidsvwqqdcvhhslxsyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947649.656904-758-127164505396972/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:50 compute-0 sudo[230860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:50 compute-0 python3.9[230863]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:50 compute-0 systemd[1]: Started libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope.
Feb 24 15:40:50 compute-0 podman[230864]: 2026-02-24 15:40:50.440341778 +0000 UTC m=+0.139795067 container exec 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:40:50 compute-0 podman[230864]: 2026-02-24 15:40:50.473506506 +0000 UTC m=+0.172959765 container exec_died 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:40:50 compute-0 systemd[1]: libpod-conmon-0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a.scope: Deactivated successfully.
Feb 24 15:40:50 compute-0 sudo[230860]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:51 compute-0 sudo[231044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuenlhwrcgrnstmnlieliwwakvxmqavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947650.8054347-766-166106868493953/AnsiballZ_file.py'
Feb 24 15:40:51 compute-0 sudo[231044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:51 compute-0 python3.9[231047]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:51 compute-0 sudo[231044]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:52 compute-0 sudo[231197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihlalqbxuaikwrufhvsjvhmawrvkbeos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947651.7891772-775-179167658880136/AnsiballZ_podman_container_info.py'
Feb 24 15:40:52 compute-0 sudo[231197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:52 compute-0 python3.9[231200]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 24 15:40:52 compute-0 sudo[231197]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:53 compute-0 sudo[231363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbnxpwkvcaqordehkczibojnfdobmjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947652.8357925-783-81938267743988/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:53 compute-0 sudo[231363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:53 compute-0 python3.9[231366]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:53 compute-0 systemd[1]: Started libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope.
Feb 24 15:40:53 compute-0 podman[231367]: 2026-02-24 15:40:53.616230421 +0000 UTC m=+0.113125639 container exec 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:40:53 compute-0 podman[231367]: 2026-02-24 15:40:53.648940497 +0000 UTC m=+0.145835675 container exec_died 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:40:53 compute-0 sudo[231363]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:53 compute-0 systemd[1]: libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope: Deactivated successfully.
Feb 24 15:40:54 compute-0 sudo[231546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csegroouwiwlvcnijhdrkgbgndocunjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947653.9209204-791-230013548978939/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:54 compute-0 sudo[231546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:54 compute-0 python3.9[231549]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:54 compute-0 systemd[1]: Started libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope.
Feb 24 15:40:54 compute-0 podman[231550]: 2026-02-24 15:40:54.724798002 +0000 UTC m=+0.111671630 container exec 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:40:54 compute-0 podman[231550]: 2026-02-24 15:40:54.759901135 +0000 UTC m=+0.146774693 container exec_died 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:40:54 compute-0 systemd[1]: libpod-conmon-4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9.scope: Deactivated successfully.
Feb 24 15:40:54 compute-0 sudo[231546]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:55 compute-0 sudo[231728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttffggdkqklxjxttvbxbjwzwpmypzqbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947655.0842626-799-83049511056779/AnsiballZ_file.py'
Feb 24 15:40:55 compute-0 sudo[231728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:40:55.694 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:40:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:40:55.695 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:40:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:40:55.695 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:40:55 compute-0 python3.9[231731]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:55 compute-0 sudo[231728]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:56 compute-0 sudo[231881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshkkwxovkkuyutoxaewifigttbcucig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947656.0644715-808-526513548136/AnsiballZ_podman_container_info.py'
Feb 24 15:40:56 compute-0 sudo[231881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:56 compute-0 python3.9[231884]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 24 15:40:56 compute-0 sudo[231881]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:57 compute-0 sudo[232046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgumiogiknyrcbdoydvmcrkjvkmdsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947657.0233808-816-93452185844620/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:57 compute-0 sudo[232046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:57 compute-0 python3.9[232049]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:57 compute-0 systemd[1]: Started libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope.
Feb 24 15:40:57 compute-0 podman[232050]: 2026-02-24 15:40:57.81724656 +0000 UTC m=+0.118403777 container exec e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.7, managed_by=edpm_ansible)
Feb 24 15:40:57 compute-0 podman[232050]: 2026-02-24 15:40:57.850875247 +0000 UTC m=+0.152032414 container exec_died e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Feb 24 15:40:57 compute-0 systemd[1]: libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope: Deactivated successfully.
Feb 24 15:40:57 compute-0 sudo[232046]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:58 compute-0 sudo[232227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxbjfpaqyfsntxvomtmcqtvygvanckw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947658.147692-824-35657317007884/AnsiballZ_podman_container_exec.py'
Feb 24 15:40:58 compute-0 sudo[232227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:58 compute-0 python3.9[232230]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:40:58 compute-0 systemd[1]: Started libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope.
Feb 24 15:40:58 compute-0 podman[232231]: 2026-02-24 15:40:58.841217637 +0000 UTC m=+0.105550006 container exec e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Feb 24 15:40:58 compute-0 podman[232231]: 2026-02-24 15:40:58.875796961 +0000 UTC m=+0.140129280 container exec_died e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., version=9.7, vcs-type=git, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Feb 24 15:40:58 compute-0 systemd[1]: libpod-conmon-e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a.scope: Deactivated successfully.
Feb 24 15:40:58 compute-0 sudo[232227]: pam_unix(sudo:session): session closed for user root
Feb 24 15:40:58 compute-0 podman[232248]: 2026-02-24 15:40:58.956878406 +0000 UTC m=+0.109056354 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:40:59 compute-0 sudo[232433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuodrdkdncjivplrpdegvcfbpyuwlyqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947659.1941245-832-48474652757086/AnsiballZ_file.py'
Feb 24 15:40:59 compute-0 sudo[232433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:40:59 compute-0 podman[204685]: time="2026-02-24T15:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:40:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 24 15:40:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3840 "" "Go-http-client/1.1"
Feb 24 15:40:59 compute-0 python3.9[232436]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:40:59 compute-0 sudo[232433]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:00 compute-0 sudo[232603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfjqcuzhgnwipzfiofbjfdlarimolfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947660.173768-841-231689103275063/AnsiballZ_podman_container_info.py'
Feb 24 15:41:00 compute-0 sudo[232603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:00 compute-0 podman[232560]: 2026-02-24 15:41:00.647734517 +0000 UTC m=+0.110751902 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:41:00 compute-0 python3.9[232608]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Feb 24 15:41:00 compute-0 sudo[232603]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:01 compute-0 openstack_network_exporter[207830]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:41:01 compute-0 openstack_network_exporter[207830]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:41:01 compute-0 sudo[232782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcfulhdaxjwnervvxlkgkzyrcbzjaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947661.2528532-849-1419772578868/AnsiballZ_podman_container_exec.py'
Feb 24 15:41:01 compute-0 sudo[232782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:01 compute-0 podman[232745]: 2026-02-24 15:41:01.750451374 +0000 UTC m=+0.137817175 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:41:01 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-51eab3f522c421a5.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 15:41:01 compute-0 systemd[1]: e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733-51eab3f522c421a5.service: Failed with result 'exit-code'.
Feb 24 15:41:01 compute-0 python3.9[232789]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:41:02 compute-0 systemd[1]: Started libpod-conmon-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope.
Feb 24 15:41:02 compute-0 podman[232792]: 2026-02-24 15:41:02.052144866 +0000 UTC m=+0.137659770 container exec e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:41:02 compute-0 podman[232792]: 2026-02-24 15:41:02.085834616 +0000 UTC m=+0.171349480 container exec_died e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:41:02 compute-0 sudo[232782]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:02 compute-0 systemd[1]: libpod-conmon-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope: Deactivated successfully.
Feb 24 15:41:02 compute-0 sudo[232970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuwuuhteoybhtkmroqpjcmzyjssvsihy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947662.484008-857-278141014386473/AnsiballZ_podman_container_exec.py'
Feb 24 15:41:02 compute-0 sudo[232970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:03 compute-0 podman[232972]: 2026-02-24 15:41:03.035489939 +0000 UTC m=+0.109126246 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., config_id=kepler, build-date=2024-09-18T21:23:30, name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543)
Feb 24 15:41:03 compute-0 python3.9[232977]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:41:03 compute-0 systemd[1]: Started libpod-conmon-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope.
Feb 24 15:41:03 compute-0 podman[232996]: 2026-02-24 15:41:03.311731715 +0000 UTC m=+0.151952354 container exec e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:41:03 compute-0 podman[232996]: 2026-02-24 15:41:03.345054903 +0000 UTC m=+0.185275502 container exec_died e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Feb 24 15:41:03 compute-0 systemd[1]: libpod-conmon-e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733.scope: Deactivated successfully.
Feb 24 15:41:03 compute-0 sudo[232970]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:04 compute-0 sudo[233174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvtgruwftdhfcscvaxarvflfuungmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947663.6354918-865-157807607924294/AnsiballZ_file.py'
Feb 24 15:41:04 compute-0 sudo[233174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:04 compute-0 python3.9[233177]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:04 compute-0 sudo[233174]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:05 compute-0 sudo[233340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eljexromxyxwvbnqjvwoxboulptreizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947664.617575-874-7269498193009/AnsiballZ_podman_container_info.py'
Feb 24 15:41:05 compute-0 sudo[233340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:05 compute-0 podman[233301]: 2026-02-24 15:41:05.087842719 +0000 UTC m=+0.133931426 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 15:41:05 compute-0 python3.9[233349]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Feb 24 15:41:05 compute-0 sudo[233340]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:06 compute-0 sudo[233512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erixzuibdcyfhmwgohybeojkdmfvihjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947665.681993-882-67052870129142/AnsiballZ_podman_container_exec.py'
Feb 24 15:41:06 compute-0 sudo[233512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:06 compute-0 python3.9[233515]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:41:06 compute-0 systemd[1]: Started libpod-conmon-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.scope.
Feb 24 15:41:06 compute-0 podman[233516]: 2026-02-24 15:41:06.451394806 +0000 UTC m=+0.132254158 container exec 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 24 15:41:06 compute-0 podman[233516]: 2026-02-24 15:41:06.486491125 +0000 UTC m=+0.167350467 container exec_died 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=kepler, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Feb 24 15:41:06 compute-0 systemd[1]: libpod-conmon-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.scope: Deactivated successfully.
Feb 24 15:41:06 compute-0 sudo[233512]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:07 compute-0 sudo[233696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fectdyzpdobgivlvwomuuwopuuqcfgff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947666.769613-890-23019640690811/AnsiballZ_podman_container_exec.py'
Feb 24 15:41:07 compute-0 sudo[233696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:07 compute-0 python3.9[233699]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 24 15:41:07 compute-0 systemd[1]: Started libpod-conmon-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.scope.
Feb 24 15:41:07 compute-0 podman[233700]: 2026-02-24 15:41:07.598327968 +0000 UTC m=+0.138925276 container exec 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 24 15:41:07 compute-0 podman[233700]: 2026-02-24 15:41:07.632594685 +0000 UTC m=+0.173192003 container exec_died 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Feb 24 15:41:07 compute-0 systemd[1]: libpod-conmon-106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561.scope: Deactivated successfully.
Feb 24 15:41:07 compute-0 sudo[233696]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:08 compute-0 sudo[233879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwrveeohfrkotfucurptkfuzgyfbyehd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947667.9643686-898-91423771400022/AnsiballZ_file.py'
Feb 24 15:41:08 compute-0 sudo[233879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:08 compute-0 python3.9[233882]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:08 compute-0 sudo[233879]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:09 compute-0 sudo[234032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poxksprbraeeqmfvyvkslhcjcmgrmcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947668.9932015-907-201679796026436/AnsiballZ_file.py'
Feb 24 15:41:09 compute-0 sudo[234032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:09 compute-0 python3.9[234035]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:09 compute-0 sudo[234032]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:10 compute-0 sudo[234185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trojijshncyydpkwggjzmcckgsrgfnhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947669.8181942-915-243032166783937/AnsiballZ_stat.py'
Feb 24 15:41:10 compute-0 sudo[234185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:10 compute-0 python3.9[234188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:10 compute-0 sudo[234185]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:10 compute-0 podman[234189]: 2026-02-24 15:41:10.497496292 +0000 UTC m=+0.099493685 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 24 15:41:10 compute-0 podman[234190]: 2026-02-24 15:41:10.531090969 +0000 UTC m=+0.121531856 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 24 15:41:10 compute-0 sudo[234352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbtbmsibyyvioykfxjranyjcibsvxolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947669.8181942-915-243032166783937/AnsiballZ_copy.py'
Feb 24 15:41:10 compute-0 sudo[234352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:11 compute-0 python3.9[234355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771947669.8181942-915-243032166783937/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:11 compute-0 sudo[234352]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:11 compute-0 sudo[234505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmesbfxrpkpgzvgflimpzwbrtealcig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947671.433254-931-281210555856562/AnsiballZ_file.py'
Feb 24 15:41:11 compute-0 sudo[234505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:12 compute-0 python3.9[234508]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:12 compute-0 sudo[234505]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:12 compute-0 sudo[234658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elysgpebbswvccwhayxdwheznlxmvhyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947672.3380642-939-73818951473528/AnsiballZ_stat.py'
Feb 24 15:41:12 compute-0 sudo[234658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:12 compute-0 python3.9[234661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:13 compute-0 sudo[234658]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:13 compute-0 sudo[234737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnzpjkmqvijnmljjyvtxkuawqrkvxgok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947672.3380642-939-73818951473528/AnsiballZ_file.py'
Feb 24 15:41:13 compute-0 sudo[234737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:13 compute-0 python3.9[234740]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:13 compute-0 sudo[234737]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:14 compute-0 sudo[234890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbhuslzhqopwioomltuwaixhrbirboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947673.827266-951-18566246983732/AnsiballZ_stat.py'
Feb 24 15:41:14 compute-0 sudo[234890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:14 compute-0 sshd-session[234894]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 15:41:14 compute-0 python3.9[234893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:14 compute-0 sudo[234890]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:14 compute-0 sudo[234971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uspwsjfvlzfevgikbcqdgfmsxookxorg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947673.827266-951-18566246983732/AnsiballZ_file.py'
Feb 24 15:41:14 compute-0 sudo[234971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:15 compute-0 python3.9[234974]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a_53vmjx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:15 compute-0 sudo[234971]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:15 compute-0 sudo[235124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmbduacillqonfbfszdaznsqaqcvsrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947675.3702664-963-70125181117439/AnsiballZ_stat.py'
Feb 24 15:41:15 compute-0 sudo[235124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:15 compute-0 python3.9[235127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:15 compute-0 sudo[235124]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:16 compute-0 sudo[235203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgaskrhqafyzymmmhignuzrvpawgjlre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947675.3702664-963-70125181117439/AnsiballZ_file.py'
Feb 24 15:41:16 compute-0 sudo[235203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:16 compute-0 podman[235205]: 2026-02-24 15:41:16.433436715 +0000 UTC m=+0.113703195 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:41:16 compute-0 python3.9[235212]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:16 compute-0 sudo[235203]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:17 compute-0 sudo[235380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalxhdbbbvpgrwmnktethcyvbbuxdlwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947676.8376892-976-140455779918143/AnsiballZ_command.py'
Feb 24 15:41:17 compute-0 sudo[235380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:17 compute-0 python3.9[235383]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:41:17 compute-0 sudo[235380]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:18 compute-0 sudo[235534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjfzkvcmncpzbwjtazlrvelvkrgbbey ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947677.7808135-984-90392057658259/AnsiballZ_edpm_nftables_from_files.py'
Feb 24 15:41:18 compute-0 sudo[235534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:18 compute-0 python3[235537]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 24 15:41:18 compute-0 sudo[235534]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:19 compute-0 sudo[235687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhxihoznadzvmbcitwntgxlothgurhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947678.8922884-992-195191289767051/AnsiballZ_stat.py'
Feb 24 15:41:19 compute-0 sudo[235687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:19 compute-0 python3.9[235690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:19 compute-0 sudo[235687]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:20 compute-0 sudo[235767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvmkzmatfsegewygsmrfphkylnweknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947678.8922884-992-195191289767051/AnsiballZ_file.py'
Feb 24 15:41:20 compute-0 sudo[235767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:20 compute-0 python3.9[235770]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:20 compute-0 sudo[235767]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:21 compute-0 sudo[235920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehmnzzztajbpvgdtkjnlfktsblilarz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947680.4623325-1004-247809183948236/AnsiballZ_stat.py'
Feb 24 15:41:21 compute-0 sudo[235920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:21 compute-0 python3.9[235923]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:21 compute-0 sudo[235920]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:21 compute-0 sudo[235999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbjgdynmqmuucubhdkdmfablzywtrdmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947680.4623325-1004-247809183948236/AnsiballZ_file.py'
Feb 24 15:41:21 compute-0 sudo[235999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:21 compute-0 python3.9[236002]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:21 compute-0 sudo[235999]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:22 compute-0 sudo[236152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zunjxorevfnemiuwsfpqjvjegvykvqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947682.0741827-1016-135636197354550/AnsiballZ_stat.py'
Feb 24 15:41:22 compute-0 sudo[236152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:22 compute-0 python3.9[236155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:22 compute-0 sudo[236152]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:23 compute-0 sudo[236231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjinkbuzcfguqikqmowoinysxhofpvpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947682.0741827-1016-135636197354550/AnsiballZ_file.py'
Feb 24 15:41:23 compute-0 sudo[236231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:23 compute-0 python3.9[236234]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:23 compute-0 sudo[236231]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:24 compute-0 sudo[236384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxxdaqrwxuzculwcldlsppudyfdjduk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947683.627025-1028-243246649678295/AnsiballZ_stat.py'
Feb 24 15:41:24 compute-0 sudo[236384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:24 compute-0 python3.9[236387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:24 compute-0 sudo[236384]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:24 compute-0 sudo[236463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbgluvyestafkamzogdmwarzkxacryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947683.627025-1028-243246649678295/AnsiballZ_file.py'
Feb 24 15:41:24 compute-0 sudo[236463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:24 compute-0 python3.9[236466]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:24 compute-0 sudo[236463]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:25 compute-0 sudo[236616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhwkopppunwauocpitrzkasdhhgzlpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947685.1788034-1040-20604236492861/AnsiballZ_stat.py'
Feb 24 15:41:25 compute-0 sudo[236616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:25 compute-0 python3.9[236619]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:25 compute-0 sudo[236616]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:26 compute-0 sudo[236742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrnabnmiiigbylswbfecwvwmqzoswnhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947685.1788034-1040-20604236492861/AnsiballZ_copy.py'
Feb 24 15:41:26 compute-0 sudo[236742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:26 compute-0 python3.9[236745]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771947685.1788034-1040-20604236492861/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:26 compute-0 sudo[236742]: pam_unix(sudo:session): session closed for user root
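[annotation] edpm-rules.nft is handled differently from the three files before it: the follow-up is ansible.legacy.copy with force=True, so changed content is actually transferred, the rendered ruleset's sha1 (b82fbd2c...) is recorded in the task args, and the content itself is masked as NOT_LOGGING_PARAMETER. A quick drift check against the checksum recorded above, as a sketch:

    import hashlib

    expected = "b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce"  # checksum logged by the copy task
    with open("/etc/nftables/edpm-rules.nft", "rb") as f:
        actual = hashlib.sha1(f.read()).hexdigest()
    print("match" if actual == expected else f"drifted: {actual}")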
Feb 24 15:41:27 compute-0 rsyslogd[1018]: imjournal: 3305 messages lost due to rate-limiting (20000 allowed within 600 seconds)
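[annotation] rsyslogd's imjournal input dropped 3305 journal messages because the burst ceiling (20000 messages per 600 s, the imjournal defaults) was exceeded; the Ansible run plus the container health checks easily clear that rate. The loss affects rsyslog's files only, journald still holds the full stream, and if it matters the limiter can be widened via imjournal's ratelimit.burst / ratelimit.interval module parameters. A small sketch to total the damage across a saved log; the path is illustrative:

    import re

    pattern = re.compile(r"imjournal: (\d+) messages lost due to rate-limiting")
    total = 0
    with open("/var/log/messages", errors="replace") as log:
        for line in log:
            m = pattern.search(line)
            if m:
                total += int(m.group(1))
    print(f"messages lost to rate-limiting: {total}")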
Feb 24 15:41:27 compute-0 sudo[236895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufucmbllbvcrwnoomudiohqwfrdputrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947687.1080525-1055-16220370179941/AnsiballZ_file.py'
Feb 24 15:41:27 compute-0 sudo[236895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:27 compute-0 python3.9[236898]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:27 compute-0 sudo[236895]: pam_unix(sudo:session): session closed for user root
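[annotation] Touching /etc/nftables/edpm-rules.nft.changed records that the ruleset content changed this run. The marker is consulted again by the builtin.stat task at 15:41:31 below to gate the live reload, and deleted by the builtin.file state=absent task at 15:41:33 once the reload succeeds. The protocol, as a sketch:

    from pathlib import Path

    marker = Path("/etc/nftables/edpm-rules.nft.changed")

    def mark_changed() -> None:
        marker.touch(mode=0o600)          # the builtin.file state=touch task above

    def rules_need_reload() -> bool:
        return marker.exists()            # the builtin.stat task at 15:41:31

    def clear_marker() -> None:
        marker.unlink(missing_ok=True)    # the builtin.file state=absent task at 15:41:33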
Feb 24 15:41:28 compute-0 sudo[237048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgziqqjdueoqlnwvfubzvlbdprvkigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947688.0226548-1063-144824318631020/AnsiballZ_command.py'
Feb 24 15:41:28 compute-0 sudo[237048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:28 compute-0 python3.9[237051]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:41:28 compute-0 sudo[237048]: pam_unix(sudo:session): session closed for user root
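[annotation] This command task is a dry run: the five fragments are concatenated in dependency order (chains first, jumps last) and piped to nft -c -f -, which parses and validates the combined ruleset from stdin without committing anything; set -o pipefail makes a failed cat fail the task too. A rough Python equivalent:

    import subprocess
    from pathlib import Path

    fragments = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = b"".join(Path(p).read_bytes() for p in fragments)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, check=True)  # -c: check only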
Feb 24 15:41:29 compute-0 podman[237120]: 2026-02-24 15:41:29.138013828 +0000 UTC m=+0.094189565 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:41:29 compute-0 sudo[237227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epvkmugnnttuumjiszntymnrnyuoysca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947688.9617605-1071-251597688094279/AnsiballZ_blockinfile.py'
Feb 24 15:41:29 compute-0 sudo[237227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:29 compute-0 podman[204685]: time="2026-02-24T15:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:41:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28005 "" "Go-http-client/1.1"
Feb 24 15:41:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3847 "" "Go-http-client/1.1"
Feb 24 15:41:29 compute-0 python3.9[237230]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:29 compute-0 sudo[237227]: pam_unix(sudo:session): session closed for user root
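[annotation] blockinfile maintains an idempotent, marker-delimited section in /etc/sysconfig/nftables.conf so the EDPM includes are loaded by the nftables service at boot, and validate='nft -c -f %s' has Ansible syntax-check a temporary copy of the whole file before swapping it in. Given the marker format and block parameters logged above, the managed section should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK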
Feb 24 15:41:30 compute-0 sudo[237380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsojazgewizkczipdnbehhxkwtppkmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947690.2041588-1080-205663773886379/AnsiballZ_command.py'
Feb 24 15:41:30 compute-0 sudo[237380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:30 compute-0 python3.9[237383]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:41:30 compute-0 sudo[237380]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:30 compute-0 podman[237385]: 2026-02-24 15:41:30.973282899 +0000 UTC m=+0.087010223 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:41:31 compute-0 openstack_network_exporter[207830]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:41:31 compute-0 openstack_network_exporter[207830]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
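[annotation] These two errors come from the exporter querying ovs-vswitchd's control interface: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only return data when OVS runs the userspace (netdev/DPDK) datapath with PMD threads. On a node using the kernel datapath there is no dpif-netdev datapath to name, so the calls fail and the PMD metrics are simply absent; harmless here. Assuming standard OVS tooling and a reachable vswitchd socket, the same probe by hand:

    import subprocess

    # On a kernel-datapath host this prints the same
    # "please specify an existing datapath" error seen above.
    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                       capture_output=True, text=True)
    print(r.stdout or r.stderr)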
Feb 24 15:41:31 compute-0 sudo[237553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqjxdpnmvzffbmvljzljcbcpuzclrck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947691.090489-1088-65411570919279/AnsiballZ_stat.py'
Feb 24 15:41:31 compute-0 sudo[237553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:31 compute-0 python3.9[237556]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 24 15:41:31 compute-0 sudo[237553]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:31 compute-0 podman[237559]: 2026-02-24 15:41:31.907492946 +0000 UTC m=+0.102969043 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 15:41:32 compute-0 sudo[237726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvzntvsdogsdsmmcnreydzhbgsyszhvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947692.0447288-1096-239577669406065/AnsiballZ_command.py'
Feb 24 15:41:32 compute-0 sudo[237726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:32 compute-0 python3.9[237729]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:41:32 compute-0 sudo[237726]: pam_unix(sudo:session): session closed for user root
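[annotation] With the .changed marker present, the role goes live: edpm-chains.nft was applied on its own at 15:41:30 (re-adding existing tables and chains is idempotent in nft), and here flushes, rules, and update-jumps are concatenated and fed to nft -f - . nft applies a file as a single transaction, so the chains are flushed and repopulated atomically, with no window where traffic hits an empty chain. Same pipeline as the dry-run sketch earlier, minus -c:

    import subprocess
    from pathlib import Path

    fragments = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]
    ruleset = b"".join(Path(p).read_bytes() for p in fragments)
    subprocess.run(["nft", "-f", "-"], input=ruleset, check=True)  # applied as one transaction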
Feb 24 15:41:33 compute-0 sudo[237897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugptmghnxozicvskccffdtdrhqxtqtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947692.9622095-1104-1786787232510/AnsiballZ_file.py'
Feb 24 15:41:33 compute-0 sudo[237897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:33 compute-0 podman[237856]: 2026-02-24 15:41:33.387584807 +0000 UTC m=+0.096935722 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, config_id=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=)
Feb 24 15:41:33 compute-0 python3.9[237905]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:33 compute-0 sudo[237897]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:34 compute-0 sshd-session[216691]: Connection closed by 192.168.122.30 port 57566
Feb 24 15:41:34 compute-0 sshd-session[216688]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:41:34 compute-0 systemd-logind[813]: Session 26 logged out. Waiting for processes to exit.
Feb 24 15:41:34 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Feb 24 15:41:34 compute-0 systemd[1]: session-26.scope: Consumed 1min 32.796s CPU time.
Feb 24 15:41:34 compute-0 systemd-logind[813]: Removed session 26.
Feb 24 15:41:36 compute-0 podman[237930]: 2026-02-24 15:41:36.141707043 +0000 UTC m=+0.091815208 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., release=1770267347, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.977 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.978 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.978 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 15:41:37 compute-0 nova_compute[188703]: 2026-02-24 15:41:37.994 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.015 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:39 compute-0 sshd-session[237951]: Accepted publickey for zuul from 192.168.122.30 port 57022 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 15:41:39 compute-0 systemd-logind[813]: New session 27 of user zuul.
Feb 24 15:41:39 compute-0 systemd[1]: Started Session 27 of User zuul.
Feb 24 15:41:39 compute-0 sshd-session[237951]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.825 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.826 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
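[annotation] The agent first warns that its pollster count exceeds the worker pool, then confirms it will process the [pollsters] source with a single thread. Every "Registering pollster ..." line that follows is a task queued on that one-worker executor, so the registrations run strictly sequentially even though they go through a ThreadPoolExecutor. A minimal model, with an illustrative subset of the meter names from this cycle:

    from concurrent.futures import ThreadPoolExecutor

    meters = ["memory.usage", "disk.device.allocation", "network.incoming.bytes", "cpu"]

    with ThreadPoolExecutor(max_workers=1) as pool:      # "[1] threads" per the log
        futures = [pool.submit(print, f"polling {m}") for m in meters]
        for f in futures:
            f.result()                                   # completes one at a time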
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.826 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.831 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.832 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.832 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.835 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'disk.device.write.latency': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'disk.device.write.latency': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'disk.device.write.latency': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260256c680>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'disk.device.write.latency': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.843 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.844 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.845 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.846 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.846 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.846 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.846 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:41:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:41:39.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
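
[editor's note] The block above is one complete ceilometer polling cycle: each pollster is registered against a shared ThreadPoolExecutor, the agent runs the local_instances discovery method for it, and because discovery returns no instances (discovery cache {'local_instances': []}) every meter is skipped before its "Finished processing" line. A minimal sketch of that discover-then-poll pattern, with hypothetical names rather than the actual ceilometer source:

    from concurrent.futures import ThreadPoolExecutor

    def run_cycle(pollsters, discover):
        """Sketch of the polling cycle visible in the log above."""
        executor = ThreadPoolExecutor()
        discovery_cache = {}

        def poll_one(pollster):
            # All pollsters share one discovery result per cycle.
            resources = discovery_cache.setdefault("local_instances", discover())
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                return
            pollster.get_samples(resources)

        # Each submission logs "Registering pollster ... via executor ...";
        # each completed future logs "Finished processing pollster [...]".
        for f in [executor.submit(poll_one, p) for p in pollsters]:
            f.result()
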
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.963 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:41:39 compute-0 nova_compute[188703]: 2026-02-24 15:41:39.964 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
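
[editor's note] The interleaved nova_compute lines are oslo.service periodic tasks firing on their timers; each "Running periodic task ComputeManager._*" entry corresponds to a method decorated with @periodic_task.periodic_task. A hedged sketch of how such a task is declared (illustrative class, not nova's actual code):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Rebuild the list of instances whose network info cache
            # needs healing; with no local instances the task returns
            # early, matching "Didn't find any instances ..." above.
            pass
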
Feb 24 15:41:40 compute-0 python3.9[238105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:41:40 compute-0 nova_compute[188703]: 2026-02-24 15:41:40.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:41 compute-0 podman[238139]: 2026-02-24 15:41:41.163436725 +0000 UTC m=+0.113143080 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:41:41 compute-0 podman[238143]: 2026-02-24 15:41:41.204451971 +0000 UTC m=+0.154758473 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
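
[editor's note] The podman health_status events fire each time a container's healthcheck timer runs the configured test (visible in config_data under 'healthcheck') inside the container; health_failing_streak=0 means the last run succeeded. A one-off check can be triggered by hand:

    # Run the healthcheck for a container named in the log above;
    # exit status 0 means healthy.
    podman healthcheck run ceilometer_agent_compute
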
Feb 24 15:41:41 compute-0 sudo[238301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqemyrhxbllxvaprhngfnkcdnvxrrxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947701.0868495-29-33032957106880/AnsiballZ_systemd.py'
Feb 24 15:41:41 compute-0 sudo[238301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:41 compute-0 nova_compute[188703]: 2026-02-24 15:41:41.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:41 compute-0 nova_compute[188703]: 2026-02-24 15:41:41.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:41 compute-0 nova_compute[188703]: 2026-02-24 15:41:41.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
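
[editor's note] _reclaim_queued_deletes exits immediately because reclaim_instance_interval is not positive, so deferred deletion of SOFT_DELETED instances is disabled on this host. Enabling it would be a nova.conf change along these lines (600 is only an example value; 0 is the default and is what the log reflects):

    [DEFAULT]
    # Seconds to wait before reclaiming soft-deleted instances.
    # 0 disables the reclaim task, as logged above.
    reclaim_instance_interval = 600
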
Feb 24 15:41:42 compute-0 python3.9[238304]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Feb 24 15:41:42 compute-0 sudo[238301]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:42 compute-0 sudo[238455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viswgyewakpcyexcblqfjckidhmpguwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947702.4252343-37-208457354321773/AnsiballZ_setup.py'
Feb 24 15:41:42 compute-0 sudo[238455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:42 compute-0 nova_compute[188703]: 2026-02-24 15:41:42.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:42 compute-0 nova_compute[188703]: 2026-02-24 15:41:42.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.080 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.081 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.081 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.081 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:41:43 compute-0 python3.9[238458]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.461 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.462 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5661MB free_disk=72.29197311401367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.463 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.463 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:41:43 compute-0 sudo[238455]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.648 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.648 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.767 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.880 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.880 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.899 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.932 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.958 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.977 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.979 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:41:43 compute-0 nova_compute[188703]: 2026-02-24 15:41:43.980 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
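
[editor's note] The inventory pushed to placement is what the scheduler sees, and usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the values logged above:

    # Effective schedulable capacity from the logged inventory.
    vcpu = (8 - 0) * 4.0         # 32.0 vCPUs
    ram_mb = (7679 - 512) * 1.0  # 7167.0 MB
    disk_gb = (79 - 0) * 0.9     # 71.1 GB
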
Feb 24 15:41:44 compute-0 sudo[238540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiakkknfckownjdtbcdclpqeojgenuaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947702.4252343-37-208457354321773/AnsiballZ_dnf.py'
Feb 24 15:41:44 compute-0 sudo[238540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:44 compute-0 python3.9[238543]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
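
[editor's note] This dnf invocation installs rsyslog-openssl, rsyslog's TLS network stream driver, ahead of the forwarding configuration deployed just below. Reconstructed from the logged module arguments (the play itself is not in the log), the task would look roughly like:

    - name: Install rsyslog TLS support
      ansible.builtin.dnf:
        name: rsyslog-openssl
        state: present
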
Feb 24 15:41:44 compute-0 nova_compute[188703]: 2026-02-24 15:41:44.979 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:41:46 compute-0 sudo[238540]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:46 compute-0 podman[238550]: 2026-02-24 15:41:46.725413739 +0000 UTC m=+0.093241638 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:41:47 compute-0 sudo[238722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrawkujtmxdqbyibjbobpzgeslljrajx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947706.8404925-49-73162924272410/AnsiballZ_stat.py'
Feb 24 15:41:47 compute-0 sudo[238722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:47 compute-0 python3.9[238725]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:47 compute-0 sudo[238722]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:48 compute-0 sudo[238846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtekbjwqsgwgsijvrxgyedfmvecejkit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947706.8404925-49-73162924272410/AnsiballZ_copy.py'
Feb 24 15:41:48 compute-0 sudo[238846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:48 compute-0 python3.9[238849]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947706.8404925-49-73162924272410/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:48 compute-0 sudo[238846]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:49 compute-0 sudo[238999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duwkukkfkpzjgpurzvmlndgvmrsnckov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947708.7990868-64-79270382587482/AnsiballZ_file.py'
Feb 24 15:41:49 compute-0 sudo[238999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:49 compute-0 python3.9[239002]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 24 15:41:49 compute-0 sudo[238999]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:50 compute-0 sudo[239153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-darqfjapgxogbxyuenpirehcbzmzbvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947709.8184006-72-216906194997821/AnsiballZ_stat.py'
Feb 24 15:41:50 compute-0 sudo[239153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:50 compute-0 python3.9[239156]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 24 15:41:50 compute-0 sudo[239153]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:51 compute-0 sudo[239277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglnifbicnctizspzdlwrsuktotvnzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947709.8184006-72-216906194997821/AnsiballZ_copy.py'
Feb 24 15:41:51 compute-0 sudo[239277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:51 compute-0 python3.9[239280]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771947709.8184006-72-216906194997821/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
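
[editor's note] The stat-then-copy pairs are Ansible's copy module staying idempotent: it checksums the destination first (sha1, per the stat call) and only transfers the file when the checksum differs. The shape of the task for the telemetry drop-in, reconstructed from the logged arguments (the file's contents are not in the log):

    - name: Configure rsyslog log forwarding
      ansible.builtin.copy:
        src: 10-telemetry.conf
        dest: /etc/rsyslog.d/10-telemetry.conf
        mode: "0644"
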
Feb 24 15:41:51 compute-0 sudo[239277]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:51 compute-0 sudo[239430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mikhefwyvtqtjiwbdsaeqfeblcznjzah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771947711.5659983-87-151421615834183/AnsiballZ_systemd.py'
Feb 24 15:41:51 compute-0 sudo[239430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:41:52 compute-0 python3.9[239433]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 24 15:41:52 compute-0 systemd[1]: Stopping System Logging Service...
Feb 24 15:41:52 compute-0 rsyslogd[1018]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1018" x-info="https://www.rsyslog.com"] exiting on signal 15.
Feb 24 15:41:52 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Feb 24 15:41:52 compute-0 systemd[1]: Stopped System Logging Service.
Feb 24 15:41:52 compute-0 systemd[1]: rsyslog.service: Consumed 3.983s CPU time, 9.8M memory peak, read 0B from disk, written 5.2M to disk.
Feb 24 15:41:52 compute-0 systemd[1]: Starting System Logging Service...
Feb 24 15:41:52 compute-0 rsyslogd[239437]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="239437" x-info="https://www.rsyslog.com"] start
Feb 24 15:41:52 compute-0 systemd[1]: Started System Logging Service.
Feb 24 15:41:52 compute-0 rsyslogd[239437]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 15:41:52 compute-0 rsyslogd[239437]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Feb 24 15:41:52 compute-0 rsyslogd[239437]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Feb 24 15:41:52 compute-0 rsyslogd[239437]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Feb 24 15:41:52 compute-0 sudo[239430]: pam_unix(sudo:session): session closed for user root
Feb 24 15:41:52 compute-0 rsyslogd[239437]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
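
[editor's note] After the restart rsyslog comes up with the new forwarding config and opens a TLS (ossl driver) connection to 172.17.0.80, but warns that no client certificate or key is set (error codes 2330/2331), so only the server side is authenticated. A mutual-TLS variant of the drop-in would add client credentials, roughly as below; the CA path matches the file copied earlier, while the client cert/key paths and the port are hypothetical:

    # sketch of /etc/rsyslog.d/10-telemetry.conf, not the deployed file
    global(
      DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-openshift.crt"
      DefaultNetstreamDriverCertFile="/etc/pki/rsyslog/client.crt"
      DefaultNetstreamDriverKeyFile="/etc/pki/rsyslog/client.key"
    )
    action(type="omfwd" target="172.17.0.80" port="6514" protocol="tcp"
           StreamDriver="ossl" StreamDriverMode="1"
           StreamDriverAuthMode="x509/name")
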
Feb 24 15:41:53 compute-0 sshd-session[237954]: Connection closed by 192.168.122.30 port 57022
Feb 24 15:41:53 compute-0 sshd-session[237951]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:41:53 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Feb 24 15:41:53 compute-0 systemd[1]: session-27.scope: Consumed 10.460s CPU time.
Feb 24 15:41:53 compute-0 systemd-logind[813]: Session 27 logged out. Waiting for processes to exit.
Feb 24 15:41:53 compute-0 systemd-logind[813]: Removed session 27.
Feb 24 15:41:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:41:55.695 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:41:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:41:55.696 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:41:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:41:55.696 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:41:59 compute-0 podman[204685]: time="2026-02-24T15:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:41:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:41:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Feb 24 15:42:00 compute-0 podman[239466]: 2026-02-24 15:42:00.142658967 +0000 UTC m=+0.102485139 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:42:01 compute-0 podman[239489]: 2026-02-24 15:42:01.155663391 +0000 UTC m=+0.110912630 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 15:42:01 compute-0 openstack_network_exporter[207830]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:42:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 15:42:01 compute-0 openstack_network_exporter[207830]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:42:01 compute-0 openstack_network_exporter[207830]: 
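
[editor's note] These exporter errors come from ovs-appctl calls that only succeed against a userspace (netdev) datapath with PMD threads; on a host using the kernel OVS datapath, as here, ovs-vswitchd answers "please specify an existing datapath". The failure reproduces by hand with the same commands:

    # The exporter's failing calls, run directly against ovs-vswitchd.
    ovs-appctl dpif-netdev/pmd-perf-show
    ovs-appctl dpif-netdev/pmd-rxq-show
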
Feb 24 15:42:02 compute-0 podman[239509]: 2026-02-24 15:42:02.145484315 +0000 UTC m=+0.103452978 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 15:42:04 compute-0 podman[239529]: 2026-02-24 15:42:04.179614505 +0000 UTC m=+0.129691085 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, name=ubi9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Feb 24 15:42:07 compute-0 podman[239548]: 2026-02-24 15:42:07.205436073 +0000 UTC m=+0.156184179 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., version=9.7, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9)
Feb 24 15:42:12 compute-0 podman[239569]: 2026-02-24 15:42:12.141936425 +0000 UTC m=+0.096246803 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:42:12 compute-0 podman[239570]: 2026-02-24 15:42:12.225259558 +0000 UTC m=+0.178594228 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 24 15:42:17 compute-0 podman[239616]: 2026-02-24 15:42:17.176348615 +0000 UTC m=+0.117765347 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:42:17 compute-0 sshd-session[239640]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 15:42:20 compute-0 sshd-session[239643]: Accepted publickey for zuul from 38.102.83.66 port 58518 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 15:42:20 compute-0 systemd-logind[813]: New session 28 of user zuul.
Feb 24 15:42:20 compute-0 systemd[1]: Started Session 28 of User zuul.
Feb 24 15:42:20 compute-0 sshd-session[239643]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:42:22 compute-0 python3[239820]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:42:23 compute-0 sudo[240041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtkgflaqjpkttqdnwhrsimfrlbffzgqw ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947743.4515069-37513-167152357424634/AnsiballZ_command.py'
Feb 24 15:42:23 compute-0 sudo[240041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:42:24 compute-0 python3[240044]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:42:24 compute-0 sudo[240041]: pam_unix(sudo:session): session closed for user root
Feb 24 15:42:25 compute-0 sudo[240195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xayqcmcnoxiicglvjorebwtvfovgrodc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947744.6077158-37524-107489790899227/AnsiballZ_command.py'
Feb 24 15:42:25 compute-0 sudo[240195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:42:25 compute-0 python3[240198]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:42:26 compute-0 sudo[240195]: pam_unix(sudo:session): session closed for user root
Feb 24 15:42:28 compute-0 python3[240349]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 24 15:42:29 compute-0 sudo[240500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdycwxaphkhgtmuugcahyfwxivaiaegq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947748.7109046-37570-156197693345186/AnsiballZ_setup.py'
Feb 24 15:42:29 compute-0 sudo[240500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:42:29 compute-0 python3[240503]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 24 15:42:29 compute-0 podman[204685]: time="2026-02-24T15:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:42:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:42:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3878 "" "Go-http-client/1.1"
Feb 24 15:42:30 compute-0 sudo[240500]: pam_unix(sudo:session): session closed for user root
Feb 24 15:42:30 compute-0 podman[240578]: 2026-02-24 15:42:30.686157888 +0000 UTC m=+0.093855075 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:42:31 compute-0 openstack_network_exporter[207830]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:42:31 compute-0 openstack_network_exporter[207830]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:42:31 compute-0 sudo[240759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coexzlvtykkcxgttgysiflsyhqgejcgj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947751.2365057-37601-169704994680226/AnsiballZ_command.py'
Feb 24 15:42:31 compute-0 sudo[240759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:42:31 compute-0 podman[240723]: 2026-02-24 15:42:31.716533477 +0000 UTC m=+0.125976249 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 24 15:42:31 compute-0 python3[240768]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:42:31 compute-0 sudo[240759]: pam_unix(sudo:session): session closed for user root
Feb 24 15:42:32 compute-0 sudo[240945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdrlqhhmfwnkzasuuhvhvctkhqverts ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771947752.3604136-37618-151778445327127/AnsiballZ_command.py'
Feb 24 15:42:32 compute-0 sudo[240945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:42:32 compute-0 podman[240907]: 2026-02-24 15:42:32.862452858 +0000 UTC m=+0.121744709 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 15:42:32 compute-0 python3[240952]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:42:33 compute-0 sudo[240945]: pam_unix(sudo:session): session closed for user root
Feb 24 15:42:35 compute-0 podman[240996]: 2026-02-24 15:42:35.151863888 +0000 UTC m=+0.104703375 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 15:42:38 compute-0 podman[241015]: 2026-02-24 15:42:38.151236883 +0000 UTC m=+0.098590229 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible)
Feb 24 15:42:38 compute-0 nova_compute[188703]: 2026-02-24 15:42:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.956 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.957 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.957 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.974 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:42:40 compute-0 nova_compute[188703]: 2026-02-24 15:42:40.975 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:41 compute-0 nova_compute[188703]: 2026-02-24 15:42:41.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:42 compute-0 nova_compute[188703]: 2026-02-24 15:42:42.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:43 compute-0 podman[241036]: 2026-02-24 15:42:43.131237923 +0000 UTC m=+0.085309781 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Feb 24 15:42:43 compute-0 podman[241037]: 2026-02-24 15:42:43.166526888 +0000 UTC m=+0.114487773 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.994 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.995 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.995 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:42:43 compute-0 nova_compute[188703]: 2026-02-24 15:42:43.996 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.457 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.458 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5696MB free_disk=72.29248428344727GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.459 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.459 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.544 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.545 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.583 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.604 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.606 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:42:44 compute-0 nova_compute[188703]: 2026-02-24 15:42:44.607 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:42:45 compute-0 nova_compute[188703]: 2026-02-24 15:42:45.607 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:42:48 compute-0 podman[241078]: 2026-02-24 15:42:48.17359399 +0000 UTC m=+0.123804817 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:42:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:42:55.696 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:42:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:42:55.697 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:42:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:42:55.698 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:42:59 compute-0 podman[204685]: time="2026-02-24T15:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:42:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:42:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3871 "" "Go-http-client/1.1"
Feb 24 15:43:01 compute-0 podman[241101]: 2026-02-24 15:43:01.157943687 +0000 UTC m=+0.119559267 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:43:01 compute-0 openstack_network_exporter[207830]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:43:01 compute-0 openstack_network_exporter[207830]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:43:02 compute-0 podman[241124]: 2026-02-24 15:43:02.156034197 +0000 UTC m=+0.115135942 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 24 15:43:03 compute-0 podman[241142]: 2026-02-24 15:43:03.166855989 +0000 UTC m=+0.123534610 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Feb 24 15:43:06 compute-0 podman[241161]: 2026-02-24 15:43:06.160322058 +0000 UTC m=+0.121070588 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, distribution-scope=public)
Feb 24 15:43:09 compute-0 podman[241181]: 2026-02-24 15:43:09.162330276 +0000 UTC m=+0.120851613 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1770267347, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container)
Feb 24 15:43:14 compute-0 podman[241203]: 2026-02-24 15:43:14.159960756 +0000 UTC m=+0.109154845 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:43:14 compute-0 podman[241204]: 2026-02-24 15:43:14.216217146 +0000 UTC m=+0.156739384 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 24 15:43:19 compute-0 podman[241250]: 2026-02-24 15:43:19.163254928 +0000 UTC m=+0.109084834 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
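Each of these events embeds the container's full desired state in `config_data` (network mode, ports, volumes, environment, healthcheck). As a rough illustration of how such a dict corresponds to podman flags, here is a hypothetical translator; the real edpm_ansible roles manage containers through their own machinery, so this is only a sketch:

    # Hypothetical: map a config_data-style dict onto `podman run` arguments.
    def podman_args(name, cfg):
        args = ["podman", "run", "--name", name, "--detach"]
        if cfg.get("net") == "host":
            args += ["--net", "host"]
        if cfg.get("privileged"):
            args += ["--privileged"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args.append(cfg["image"])
        cmd = cfg.get("command", [])
        return args + (cmd if isinstance(cmd, list) else [cmd])

    print(" ".join(podman_args("podman_exporter", {
        "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "net": "host", "privileged": True, "ports": ["9882:9882"],
        "command": ["--web.config.file=/etc/podman_exporter/podman_exporter.yaml"],
    })))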
Feb 24 15:43:29 compute-0 podman[204685]: time="2026-02-24T15:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:43:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:43:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3878 "" "Go-http-client/1.1"
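These two GETs are podman_exporter polling the libpod REST API over the socket it mounts at /run/podman/podman.sock (see its config_data above); the first lists all containers, the second fetches one-shot stats. The listing request can be reproduced with only the Python standard library by pointing an HTTP connection at the unix socket:

    # GET /libpod/containers/json over the podman socket, stdlib only
    # (assumes /run/podman/podman.sock exists and the caller may read it).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])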
Feb 24 15:43:31 compute-0 openstack_network_exporter[207830]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:43:31 compute-0 openstack_network_exporter[207830]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
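These ERROR lines are openstack_network_exporter issuing the equivalent of `ovs-appctl dpif-netdev/pmd-perf-show` and `dpif-netdev/pmd-rxq-show`. Those commands only apply to the userspace (netdev/DPDK) datapath; on a host whose Open vSwitch runs only the kernel `system` datapath, ovs-vswitchd answers "please specify an existing datapath", so on a non-DPDK compute node this is expected noise rather than a fault. A quick way to confirm, sketched under the assumption that the OVS tools are installed:

    # Confirm why dpif-netdev/* appctl calls fail on this host.
    import subprocess

    # A kernel-only host typically lists "system@ovs-system" and no
    # "netdev@..." datapath, which is what pmd-perf-show would need.
    print(subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True).stdout)
    res = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                         capture_output=True, text=True)
    print(res.returncode, res.stderr.strip())  # same error as the log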
Feb 24 15:43:32 compute-0 podman[241275]: 2026-02-24 15:43:32.141626712 +0000 UTC m=+0.096772791 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
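This node_exporter instance disables most default collectors and keeps systemd metrics only for units matching `(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service`. The same filter can be checked from the scrape side; a sketch assuming the exporter answers plain HTTP on :9100 (the web.config.file above may in fact enforce TLS, in which case switch to https with the telemetry certificates):

    # Apply node_exporter's unit-include regex to the scraped metrics.
    import re
    import urllib.request

    UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                m = re.search(r'name="([^"]+)"', line)
                if m and UNIT_INCLUDE.fullmatch(m.group(1)):
                    print(line)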
Feb 24 15:43:32 compute-0 sshd-session[239646]: Received disconnect from 38.102.83.66 port 58518:11: disconnected by user
Feb 24 15:43:32 compute-0 sshd-session[239646]: Disconnected from user zuul 38.102.83.66 port 58518
Feb 24 15:43:32 compute-0 sshd-session[239643]: pam_unix(sshd:session): session closed for user zuul
Feb 24 15:43:32 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Feb 24 15:43:32 compute-0 systemd[1]: session-28.scope: Consumed 9.882s CPU time.
Feb 24 15:43:32 compute-0 systemd-logind[813]: Session 28 logged out. Waiting for processes to exit.
Feb 24 15:43:32 compute-0 systemd-logind[813]: Removed session 28.
Feb 24 15:43:32 compute-0 podman[241299]: 2026-02-24 15:43:32.713969797 +0000 UTC m=+0.122217000 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:43:34 compute-0 podman[241318]: 2026-02-24 15:43:34.147739683 +0000 UTC m=+0.098006595 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
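Every config_data block above carries an EDPM_CONFIG_HASH made of sha256 digests joined with '-'; edpm_ansible compares it across runs to decide whether a container must be recreated after a configuration change. Which files feed each digest is role-specific, so the following is only a plausible reconstruction of the shape, with assumed input paths:

    # Plausible reconstruction of the EDPM_CONFIG_HASH shape:
    # one sha256 per config source, hyphen-joined. Paths are assumptions.
    import hashlib

    def config_hash(paths):
        digests = []
        for p in paths:
            with open(p, "rb") as f:
                digests.append(hashlib.sha256(f.read()).hexdigest())
        return "-".join(digests)

    print(config_hash([
        "/var/lib/openstack/telemetry/node_exporter.yaml",    # assumed input
        "/var/lib/openstack/telemetry/podman_exporter.yaml",  # assumed input
    ]))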
Feb 24 15:43:37 compute-0 podman[241338]: 2026-02-24 15:43:37.174608021 +0000 UTC m=+0.131767137 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 24 15:43:38 compute-0 sshd-session[241357]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.826 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.827 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
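The two DEBUG lines above mean the [pollsters] source has more pollsters than worker threads, so every pollster is funneled through a one-worker ThreadPoolExecutor and runs strictly in sequence; the discovery cache that appears in the lines below is what lets them all reuse one [local_instances] lookup per cycle. A toy model of that scheduling (it mirrors the logged behavior, it is not ceilometer's actual code):

    # Toy model: N pollsters through a 1-worker executor, sharing one
    # discovery result per cycle (mirrors the manager.py log lines).
    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = ["memory.usage", "disk.device.allocation", "cpu"]

    def run_pollster(name, discovery_cache):
        resources = discovery_cache.setdefault("local_instances", [])
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"Polled {name} for {len(resources)} resource(s)"

    cache = {}
    with ThreadPoolExecutor(max_workers=1) as pool:
        for fut in [pool.submit(run_pollster, p, cache) for p in POLLSTERS]:
            print(fut.result())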
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.827 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.835 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:43:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:43:39 compute-0 nova_compute[188703]: 2026-02-24 15:43:39.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:40 compute-0 podman[241360]: 2026-02-24 15:43:40.150035613 +0000 UTC m=+0.095814704 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, release=1770267347, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Feb 24 15:43:40 compute-0 nova_compute[188703]: 2026-02-24 15:43:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:41 compute-0 nova_compute[188703]: 2026-02-24 15:43:41.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:42 compute-0 nova_compute[188703]: 2026-02-24 15:43:42.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:42 compute-0 nova_compute[188703]: 2026-02-24 15:43:42.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:42 compute-0 nova_compute[188703]: 2026-02-24 15:43:42.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:43:42 compute-0 nova_compute[188703]: 2026-02-24 15:43:42.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:43:42 compute-0 nova_compute[188703]: 2026-02-24 15:43:42.966 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:43:43 compute-0 nova_compute[188703]: 2026-02-24 15:43:43.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:44 compute-0 podman[241382]: 2026-02-24 15:43:44.836443416 +0000 UTC m=+0.136841417 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute)
Feb 24 15:43:44 compute-0 podman[241383]: 2026-02-24 15:43:44.864640114 +0000 UTC m=+0.167604277 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.985 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:43:45 compute-0 nova_compute[188703]: 2026-02-24 15:43:45.987 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.491 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.492 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.2925033569336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.493 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.493 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.567 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.568 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.595 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.611 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.614 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:43:46 compute-0 nova_compute[188703]: 2026-02-24 15:43:46.615 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:43:47 compute-0 nova_compute[188703]: 2026-02-24 15:43:47.616 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:43:50 compute-0 podman[241426]: 2026-02-24 15:43:50.155626438 +0000 UTC m=+0.120002868 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:43:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:43:55.698 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:43:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:43:55.699 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:43:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:43:55.699 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:43:59 compute-0 podman[204685]: time="2026-02-24T15:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:43:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:43:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3876 "" "Go-http-client/1.1"
Feb 24 15:44:01 compute-0 openstack_network_exporter[207830]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:44:01 compute-0 openstack_network_exporter[207830]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:44:03 compute-0 podman[241450]: 2026-02-24 15:44:03.180746457 +0000 UTC m=+0.126845179 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:44:03 compute-0 podman[241451]: 2026-02-24 15:44:03.199449559 +0000 UTC m=+0.141510529 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:44:05 compute-0 podman[241492]: 2026-02-24 15:44:05.17006452 +0000 UTC m=+0.123770094 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 15:44:08 compute-0 podman[241511]: 2026-02-24 15:44:08.163002032 +0000 UTC m=+0.120533953 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 24 15:44:11 compute-0 podman[241531]: 2026-02-24 15:44:11.147834495 +0000 UTC m=+0.100331936 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.7, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1770267347, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 15:44:15 compute-0 podman[241551]: 2026-02-24 15:44:15.182481088 +0000 UTC m=+0.134681632 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 15:44:15 compute-0 podman[241552]: 2026-02-24 15:44:15.193861145 +0000 UTC m=+0.147574091 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true)
Feb 24 15:44:16 compute-0 sshd-session[241596]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 15:44:21 compute-0 podman[241599]: 2026-02-24 15:44:21.141571049 +0000 UTC m=+0.097924109 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:44:26 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:26.566 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:44:26 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:26.568 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:44:26 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:26.570 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:44:29 compute-0 podman[204685]: time="2026-02-24T15:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:44:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:44:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3880 "" "Go-http-client/1.1"
Feb 24 15:44:31 compute-0 openstack_network_exporter[207830]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:44:31 compute-0 openstack_network_exporter[207830]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:44:34 compute-0 podman[241623]: 2026-02-24 15:44:34.159406485 +0000 UTC m=+0.116394953 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:44:34 compute-0 podman[241624]: 2026-02-24 15:44:34.160415793 +0000 UTC m=+0.112838964 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 24 15:44:36 compute-0 podman[241665]: 2026-02-24 15:44:36.152007303 +0000 UTC m=+0.107023503 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:44:39 compute-0 podman[241683]: 2026-02-24 15:44:39.16055501 +0000 UTC m=+0.123349257 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vcs-type=git, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9)
Feb 24 15:44:40 compute-0 nova_compute[188703]: 2026-02-24 15:44:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:41 compute-0 nova_compute[188703]: 2026-02-24 15:44:41.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:42 compute-0 podman[241702]: 2026-02-24 15:44:42.126891985 +0000 UTC m=+0.082964032 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter)
Feb 24 15:44:43 compute-0 nova_compute[188703]: 2026-02-24 15:44:43.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:43 compute-0 nova_compute[188703]: 2026-02-24 15:44:43.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:44:43 compute-0 nova_compute[188703]: 2026-02-24 15:44:43.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:44:43 compute-0 nova_compute[188703]: 2026-02-24 15:44:43.956 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 15:44:43 compute-0 nova_compute[188703]: 2026-02-24 15:44:43.956 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:44 compute-0 nova_compute[188703]: 2026-02-24 15:44:44.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:44 compute-0 nova_compute[188703]: 2026-02-24 15:44:44.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:45 compute-0 nova_compute[188703]: 2026-02-24 15:44:45.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:45 compute-0 nova_compute[188703]: 2026-02-24 15:44:45.953 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:45 compute-0 nova_compute[188703]: 2026-02-24 15:44:45.953 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
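_reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive; nova defaults it to 0, which is why the task logs "skipping" and returns immediately. A sketch of that guard, mirroring the DEBUG line above (illustrative, not nova source):

    # reclaim_instance_interval <= 0 disables reclaim of soft-deleted
    # instances entirely.
    reclaim_instance_interval = 0  # nova's default

    def reclaim_queued_deletes(interval: int) -> None:
        if interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find SOFT_DELETED instances older than `interval`
        # seconds and delete them for real

    reclaim_queued_deletes(reclaim_instance_interval)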
Feb 24 15:44:46 compute-0 podman[241723]: 2026-02-24 15:44:46.144535182 +0000 UTC m=+0.100061218 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 15:44:46 compute-0 podman[241724]: 2026-02-24 15:44:46.189039642 +0000 UTC m=+0.142433189 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.968 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
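The Acquiring/acquired/released triplets are oslo.concurrency's lock helper logging from lockutils.py. Both forms below produce exactly this pattern; the lock name matches the log, the rest is a minimal sketch assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Context-manager form, for short critical sections:
    with lockutils.lock("compute_resources"):
        pass  # e.g. mutate the ResourceTracker's view of host resources

    # Decorator form, wrapping a whole method:
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass

    clean_compute_node_cache()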
Feb 24 15:44:46 compute-0 nova_compute[188703]: 2026-02-24 15:44:46.969 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.428 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.431 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.2925033569336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.432 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.432 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.540 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.541 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.577 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.622 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
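Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, which is why 8 physical vCPUs back 32 schedulable ones on this node. Checking the values logged above:

    # usable = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1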
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.624 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:44:47 compute-0 nova_compute[188703]: 2026-02-24 15:44:47.624 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:44:52 compute-0 podman[241768]: 2026-02-24 15:44:52.171594028 +0000 UTC m=+0.124679684 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:44:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:55.701 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:44:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:55.701 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:44:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:44:55.702 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:44:59 compute-0 podman[204685]: time="2026-02-24T15:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:44:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:44:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
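Those two GETs are the podman exporter polling the libpod REST API over podman's unix socket (the CONTAINER_HOST value in its config above). The same endpoint can be queried by hand; a stdlib-only sketch that prints just the status line (a real client such as curl --unix-socket handles chunked response bodies more gracefully):

    import socket

    SOCK = "/run/podman/podman.sock"  # from CONTAINER_HOST above

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
                  b"Host: d\r\nConnection: close\r\n\r\n")
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    print(raw.split(b"\r\n", 1)[0].decode())  # e.g. HTTP/1.1 200 OK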
Feb 24 15:45:01 compute-0 openstack_network_exporter[207830]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:45:01 compute-0 openstack_network_exporter[207830]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
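Both ERRORs are the network exporter invoking OVS appctl commands that exist only for the userspace (dpif-netdev/DPDK) datapath; on a host whose bridges use the kernel datapath, "please specify an existing datapath" is expected noise rather than a fault. Reproducing the probe by hand, with the command names exactly as logged:

    import subprocess

    # These commands apply only to the userspace datapath; on a
    # kernel-datapath host they fail the same way the exporter logs above.
    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        r = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        print(cmd, "->", r.returncode, (r.stderr or r.stdout).strip())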
Feb 24 15:45:05 compute-0 podman[241793]: 2026-02-24 15:45:05.135976365 +0000 UTC m=+0.083407844 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:45:05 compute-0 podman[241794]: 2026-02-24 15:45:05.17493001 +0000 UTC m=+0.115723055 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 15:45:07 compute-0 podman[241833]: 2026-02-24 15:45:07.157364175 +0000 UTC m=+0.111764494 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Feb 24 15:45:10 compute-0 podman[241853]: 2026-02-24 15:45:10.1338231 +0000 UTC m=+0.092166799 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Feb 24 15:45:13 compute-0 podman[241873]: 2026-02-24 15:45:13.174444602 +0000 UTC m=+0.127587125 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, version=9.7, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z)
Feb 24 15:45:16 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:16.335 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:45:16 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:16.336 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:45:17 compute-0 podman[241894]: 2026-02-24 15:45:17.138038056 +0000 UTC m=+0.092510428 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 15:45:17 compute-0 podman[241895]: 2026-02-24 15:45:17.192944295 +0000 UTC m=+0.145638678 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 24 15:45:17 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:17.339 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
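That DbSetCommand bumps neutron:ovn-metadata-sb-cfg in Chassis_Private to acknowledge nb_cfg=3 from the SB_Global update seen at 15:45:16. The equivalent manual write, sketched with ovn-sbctl (record UUID copied from the log; requires access to the OVN southbound DB):

    import subprocess

    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "ab329b13-e5ce-43e1-b513-c55bd650f251",
         "external_ids:neutron:ovn-metadata-sb-cfg=3"],
        check=True,
    )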
Feb 24 15:45:23 compute-0 podman[241939]: 2026-02-24 15:45:23.131222657 +0000 UTC m=+0.086881551 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.293 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.294 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.351 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.523 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.524 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.536 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.537 188707 INFO nova.compute.claims [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Claim successful on node compute-0.ctlplane.example.com
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.694 188707 DEBUG nova.compute.provider_tree [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.709 188707 DEBUG nova.scheduler.client.report [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.729 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.730 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.797 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.798 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.854 188707 INFO nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 15:45:27 compute-0 nova_compute[188703]: 2026-02-24 15:45:27.934 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.020 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.022 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.022 188707 INFO nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Creating image(s)
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.026 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.026 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.027 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.028 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:28 compute-0 nova_compute[188703]: 2026-02-24 15:45:28.029 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.418 188707 WARNING oslo_policy.policy [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.419 188707 WARNING oslo_policy.policy [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
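oslo.policy emits this WARNING whenever it loads a JSON-formatted policy file, and the message itself names the migration tool. Its documented invocation, wrapped in subprocess to keep the sketch in one language (paths here are illustrative, not read from this host):

    import subprocess

    # oslopolicy-convert-json-to-yaml ships with oslo.policy; see the URL
    # in the warning above for the full documentation.
    subprocess.run(
        ["oslopolicy-convert-json-to-yaml",
         "--namespace", "nova",
         "--policy-file", "/etc/nova/policy.json",   # illustrative path
         "--output-file", "/etc/nova/policy.yaml"],  # illustrative path
        check=True,
    )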
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.490 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.582 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.part --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.583 188707 DEBUG nova.virt.images [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] de6b8fc8-e0dc-4bbf-943b-e6ac6027af11 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.585 188707 DEBUG nova.privsep.utils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.586 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.part /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:29 compute-0 podman[204685]: time="2026-02-24T15:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:45:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 15:45:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3887 "" "Go-http-client/1.1"
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.822 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.part /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.converted" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.827 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.882 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.884 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:29 compute-0 nova_compute[188703]: 2026-02-24 15:45:29.911 188707 INFO oslo.privsep.daemon [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpih5osf__/privsep.sock']
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.535 188707 INFO oslo.privsep.daemon [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Spawned new privsep daemon via rootwrap
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.413 241980 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.417 241980 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.419 241980 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.419 241980 INFO oslo.privsep.daemon [-] privsep daemon running as pid 241980
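The privsep daemon just spawned runs as root but with a reduced capability set; the eff/prm/inh string above lists its effective and permitted Linux capabilities, with no inheritable ones. A trivial parse of the exact string logged:

    caps = ("CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|"
            "CAP_NET_ADMIN|CAP_SYS_ADMIN/"
            "CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|"
            "CAP_NET_ADMIN|CAP_SYS_ADMIN/none")
    effective, permitted, inheritable = caps.split("/")
    print("effective:  ", effective.split("|"))
    print("permitted:  ", permitted.split("|"))
    print("inheritable:", [] if inheritable == "none" else inheritable.split("|"))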
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.607 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.673 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.674 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.675 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.694 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.769 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.770 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.862 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk 1073741824" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.863 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
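Annotation: the instance root disk is not a copy of the base image but a qcow2 overlay on top of it; backing_fmt=raw is passed explicitly because recent qemu-img refuses to probe a backing file's format. The resulting chain can be inspected directly, for example (paths from the log, oslo.concurrency's execute helper assumed available):

    # Sketch: print the overlay -> base chain built by "qemu-img create -o backing_file=...".
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info', '--force-share', '--output=json', '--backing-chain',
        '/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk')
    chain = json.loads(out)
    print([node['filename'] for node in chain])   # overlay first, raw base second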
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.863 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.942 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.944 188707 DEBUG nova.virt.disk.api [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking if we can resize image /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 15:45:30 compute-0 nova_compute[188703]: 2026-02-24 15:45:30.945 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.009 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.011 188707 DEBUG nova.virt.disk.api [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Cannot resize image /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
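Annotation: "Cannot resize image ... to a smaller size" is a debug outcome, not a failure. The flavor's 1 GiB root disk equals the virtual size the overlay was just created with, and Nova only ever grows disks at spawn, so there is nothing to do. A simplified version of the check (modeled on nova.virt.disk.api.can_resize_image, plain subprocess used for brevity):

    # Sketch: resizing is allowed only when the requested size exceeds the current one.
    import json
    import subprocess

    def can_resize_image(path, size):
        info = json.loads(subprocess.check_output(
            ['qemu-img', 'info', '--force-share', '--output=json', path]))
        return size > info['virtual-size']   # shrink or no-op -> False, as logged above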
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.012 188707 DEBUG nova.objects.instance [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'migration_context' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.064 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.065 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.067 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.068 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.069 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.071 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.097 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.098 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.151 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.153 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
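Annotation: ephemeral disks go through the same _base cache. A raw 1 GiB backing file named ephemeral_<size-GB>_<key> is created and formatted VFAT with the label ephemeral0 that guests see, and each instance then gets a qcow2 overlay (disk.eph0, a few lines below) on top of it. The two commands from the log reduced to a sketch with oslo.concurrency (the 0706d66 suffix is assumed to key the filesystem variant):

    # Sketch: build the shared ephemeral backing file once; instances overlay it.
    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    processutils.execute('qemu-img', 'create', '-f', 'raw', base, '1G')
    processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', base)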
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.184 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.261 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.262 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.262 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.273 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.297 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Successfully created port: 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.323 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.325 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.362 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.363 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.364 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:31 compute-0 openstack_network_exporter[207830]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:45:31 compute-0 openstack_network_exporter[207830]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
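Annotation: the two exporter errors are expected on this host. The dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show appctl commands only apply to the userspace (netdev/DPDK) datapath, while the VIF details later in this log show datapath_type "system", i.e. the kernel datapath, so there are no PMD threads to query. One way to confirm, assuming ovs-vsctl is available on the node:

    # Sketch: a 'system' datapath type means the pmd-* appctl commands do not apply.
    from oslo_concurrency import processutils

    out, _err = processutils.execute('ovs-vsctl', 'get', 'Bridge', 'br-int',
                                     'datapath_type')
    print(out.strip())   # 'system' here; pmd stats exist only for 'netdev'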
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.446 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.448 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.449 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Ensure instance console log exists: /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.450 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.451 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:31 compute-0 nova_compute[188703]: 2026-02-24 15:45:31.451 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:33 compute-0 nova_compute[188703]: 2026-02-24 15:45:33.518 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Successfully updated port: 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 15:45:33 compute-0 nova_compute[188703]: 2026-02-24 15:45:33.536 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:45:33 compute-0 nova_compute[188703]: 2026-02-24 15:45:33.536 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:45:33 compute-0 nova_compute[188703]: 2026-02-24 15:45:33.537 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 15:45:33 compute-0 nova_compute[188703]: 2026-02-24 15:45:33.794 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.082 188707 DEBUG nova.compute.manager [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-changed-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.083 188707 DEBUG nova.compute.manager [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Refreshing instance network info cache due to event network-changed-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.084 188707 DEBUG oslo_concurrency.lockutils [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.933 188707 DEBUG nova.network.neutron [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.960 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.960 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Instance network_info: |[{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.962 188707 DEBUG oslo_concurrency.lockutils [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.962 188707 DEBUG nova.network.neutron [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Refreshing network info cache for port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.969 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Start _get_guest_xml network_info=[{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.980 188707 WARNING nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.995 188707 DEBUG nova.virt.libvirt.host [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 15:45:34 compute-0 nova_compute[188703]: 2026-02-24 15:45:34.996 188707 DEBUG nova.virt.libvirt.host [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.003 188707 DEBUG nova.virt.libvirt.host [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.004 188707 DEBUG nova.virt.libvirt.host [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
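Annotation: Nova probes for a usable CPU controller first on the cgroup v1 hierarchy (absent here) and then on cgroup v2 (found), which determines whether CPU shares/quota tuning can be applied to the guest. The v2 probe amounts to checking the unified hierarchy's controller list; a minimal standalone version, not Nova's exact code:

    # Sketch: cgroup v2 advertises its available controllers in a single file.
    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        try:
            with open(root + '/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False   # no unified hierarchy mounted -> cgroup v1 host

    print(has_cgroupsv2_cpu_controller())   # True on this host, per the log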
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.005 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.006 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T15:44:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='521ca388-0b2e-40c6-bb06-118d4ed86b49',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.007 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.007 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.008 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.009 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.009 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.010 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.011 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.012 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.012 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.013 188707 DEBUG nova.virt.hardware [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
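Annotation: with no hw:cpu_* extra specs on the flavor and none on the image, every limit and preference above is 0, so the topology search simply factors the vCPU count into sockets x cores x threads under the 65536 caps; for 1 vCPU the only factorization is 1:1:1, which becomes the <topology sockets="1" cores="1" threads="1"/> element in the XML below. A simplified enumerator illustrating the idea (not Nova's exact algorithm):

    # Sketch: enumerate CPU topologies whose product equals the vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as chosen in the log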
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.019 188707 DEBUG nova.privsep.utils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
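Annotation: the direct-I/O probe decides the cache mode of the disks in the XML below (cache="none" requires O_DIRECT support on the filesystem backing /var/lib/nova/instances). The check boils down to writing one page-aligned block with O_DIRECT; a simplified, Linux-only sketch of that probe:

    # Sketch: O_DIRECT needs an aligned buffer; mmap allocations are page-aligned.
    import mmap
    import os

    def supports_direct_io(dirpath):
        testfile = os.path.join(dirpath, '.directio.test')
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            try:
                os.write(fd, mmap.mmap(-1, 4096))
                return True
            finally:
                os.close(fd)
        except OSError:
            return False
        finally:
            try:
                os.unlink(testfile)
            except OSError:
                pass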
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.022 188707 DEBUG nova.virt.libvirt.vif [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:45:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-tehi0e8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:45:27Z,user_data=None,user_id='bd338d866e3242aeb685fec99c451955',uuid=fd83ae88-f3e1-49ef-8167-b8451d014cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.023 188707 DEBUG nova.network.os_vif_util [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.025 188707 DEBUG nova.network.os_vif_util [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
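Annotation: Nova's VIF dict is translated into an os-vif versioned object and handed to the "ovs" plugin; with delegate_create true, OVN rather than Nova wires up the actual port. A minimal sketch of the object the converter produced above (fields trimmed to those shown in the log; a real os_vif.plug() call also needs the Network and VIFPortProfileOpenVSwitch objects plus an InstanceInfo):

    # Sketch: the os-vif object Nova converts to before delegating the plug.
    import os_vif
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()   # loads the ovs/noop plugins via stevedore
    vif = VIFOpenVSwitch(
        id='4fe2ff99-5ba5-49b4-a275-e8c5c9b51888',
        address='fa:16:3e:1e:4c:f6',
        bridge_name='br-int',
        vif_name='tap4fe2ff99-5b',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False)
    # os_vif.plug(vif, instance_info) then hands this object to the ovs plugin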
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.027 188707 DEBUG nova.objects.instance [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.043 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] End _get_guest_xml xml=<domain type="kvm">
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <uuid>fd83ae88-f3e1-49ef-8167-b8451d014cf7</uuid>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <name>instance-00000001</name>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <memory>524288</memory>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <metadata>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:name>test_0</nova:name>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 15:45:34</nova:creationTime>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:flavor name="m1.small">
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:memory>512</nova:memory>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:ephemeral>1</nova:ephemeral>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:user uuid="bd338d866e3242aeb685fec99c451955">admin</nova:user>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:project uuid="4407f5b870e145d8917119ad928717e8">admin</nova:project>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         <nova:port uuid="4fe2ff99-5ba5-49b4-a275-e8c5c9b51888">
Feb 24 15:45:35 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="192.168.0.39" ipVersion="4"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </metadata>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <system>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="serial">fd83ae88-f3e1-49ef-8167-b8451d014cf7</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="uuid">fd83ae88-f3e1-49ef-8167-b8451d014cf7</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </system>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <os>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </os>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <features>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <apic/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </features>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </clock>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <target dev="vdb" bus="virtio"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.config"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:1e:4c:f6"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <target dev="tap4fe2ff99-5b"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/console.log" append="off"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </serial>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <video>
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </video>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 15:45:35 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 15:45:35 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 15:45:35 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:45:35 compute-0 nova_compute[188703]: </domain>
Feb 24 15:45:35 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
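Annotation on the generated domain: memory is 524288 KiB (the flavor's 512 MiB), the q35 machine type brings the pcie-root bus plus a pool of pcie-root-port controllers preallocated so devices can be hot-plugged later, all three disks inherit cache="none" from the direct-I/O probe above, and the serial console is duplicated into console.log. Once the domain is defined, the same XML can be read back with libvirt-python, for example:

    # Sketch: read the live definition back from libvirtd on the compute node.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000001')
        print(dom.XMLDesc(0))
    finally:
        conn.close()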
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.044 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Preparing to wait for external event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.045 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.045 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.045 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.046 188707 DEBUG nova.virt.libvirt.vif [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:45:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-tehi0e8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:45:27Z,user_data=None,user_id='bd338d866e3242aeb685fec99c451955',uuid=fd83ae88-f3e1-49ef-8167-b8451d014cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.046 188707 DEBUG nova.network.os_vif_util [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.047 188707 DEBUG nova.network.os_vif_util [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.048 188707 DEBUG os_vif [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
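[editor's note] The plug call above goes through the os-vif library. A hedged sketch of the same call path, with field values copied from the VIFOpenVSwitch repr in the log; this is an illustrative reconstruction under those assumptions, not nova's actual code:

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin that does the OVSDB work

net = network.Network(
    id="863f062e-1672-4c9a-8889-3b2ee95f838a", bridge="br-int", mtu=1442)
profile = vif.VIFPortProfileOpenVSwitch(
    interface_id="4fe2ff99-5ba5-49b4-a275-e8c5c9b51888")
my_vif = vif.VIFOpenVSwitch(
    id="4fe2ff99-5ba5-49b4-a275-e8c5c9b51888",
    address="fa:16:3e:1e:4c:f6",
    plugin="ovs",
    bridge_name="br-int",
    vif_name="tap4fe2ff99-5b",
    network=net,
    port_profile=profile)
inst = instance_info.InstanceInfo(
    uuid="fd83ae88-f3e1-49ef-8167-b8451d014cf7", name="test_0")

os_vif.plug(my_vif, inst)  # "Successfully plugged vif ..." on success
```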
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.093 188707 DEBUG ovsdbapp.backend.ovs_idl [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.094 188707 DEBUG ovsdbapp.backend.ovs_idl [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.094 188707 DEBUG ovsdbapp.backend.ovs_idl [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.095 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.095 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.095 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.096 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.098 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.100 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.111 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.111 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.112 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
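[editor's note] AddBridgeCommand is ovsdbapp's idempotent add_br. A sketch of issuing the same transaction against the endpoint from the CONNECTING/ACTIVE lines above; the "Transaction caused no change" line simply means br-int already existed:

```python
import ovs.db.idl
from ovsdbapp.backend.ovs_idl import connection, idlutils
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "tcp:127.0.0.1:6640"  # endpoint from the reconnect lines above
helper = idlutils.get_schema_helper(OVSDB, "Open_vSwitch")
helper.register_all()
api = impl_idl.OvsdbIdl(
    connection.Connection(ovs.db.idl.Idl(OVSDB, helper), timeout=10))

# may_exist=True makes the command a no-op when the bridge is present.
api.add_br("br-int", may_exist=True, datapath_type="system").execute(
    check_error=True)
```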
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.113 188707 INFO oslo.privsep.daemon [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpqt_rdk96/privsep.sock']
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.732 188707 INFO oslo.privsep.daemon [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Spawned new privsep daemon via rootwrap
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.610 242017 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.615 242017 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.618 242017 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 24 15:45:35 compute-0 nova_compute[188703]: 2026-02-24 15:45:35.619 242017 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242017
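[editor's note] The daemon above runs as uid 0 but holds only CAP_DAC_OVERRIDE and CAP_NET_ADMIN, matching the context nova requested (vif_plug_ovs.privsep.vif_plug). A hedged sketch of how such an oslo.privsep context is declared and used; the entrypoint name below is illustrative, not the exact vif_plug_ovs source:

```python
from oslo_privsep import capabilities, priv_context

# Capability list matches the eff/prm set in the log line above.
vif_plug = priv_context.PrivContext(
    "vif_plug_ovs",
    cfg_section="vif_plug_ovs_privileged",
    pypath="vif_plug_ovs.privsep.vif_plug",  # context path, as in the log
    capabilities=[capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_DAC_OVERRIDE],
)

@vif_plug.entrypoint
def set_device_mtu(ifname, mtu):
    # Illustrative entrypoint: the body runs inside the privsep daemon
    # (uid 0, only the two caps above); the caller sees a plain function.
    ...
```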
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.061 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.061 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe2ff99-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.062 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe2ff99-5b, col_values=(('external_ids', {'iface-id': '4fe2ff99-5ba5-49b4-a275-e8c5c9b51888', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:4c:f6', 'vm-uuid': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.064 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
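[editor's note] Continuing the ovsdbapp sketch above, the port plug is a two-command atomic transaction: add the tap port to br-int, then set the external_ids that let ovn-controller match iface-id to its logical port (`api` is the handle built in the earlier sketch):

```python
# Values copied from the DbSetCommand line in the log.
external_ids = {
    "iface-id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:1e:4c:f6",
    "vm-uuid": "fd83ae88-f3e1-49ef-8167-b8451d014cf7",
}
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap4fe2ff99-5b", may_exist=True))
    txn.add(api.db_set("Interface", "tap4fe2ff99-5b",
                       ("external_ids", external_ids)))
```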
Feb 24 15:45:36 compute-0 NetworkManager[56995]: <info>  [1771947936.0666] manager: (tap4fe2ff99-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.067 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.075 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.076 188707 INFO os_vif [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b')
Feb 24 15:45:36 compute-0 podman[242021]: 2026-02-24 15:45:36.144990409 +0000 UTC m=+0.097608051 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.155 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.155 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.156 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.156 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No VIF found with MAC fa:16:3e:1e:4c:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.157 188707 INFO nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Using config drive
Feb 24 15:45:36 compute-0 podman[242022]: 2026-02-24 15:45:36.16153525 +0000 UTC m=+0.113520723 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.373 188707 DEBUG nova.network.neutron [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated VIF entry in instance network info cache for port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.374 188707 DEBUG nova.network.neutron [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.396 188707 DEBUG oslo_concurrency.lockutils [req-4049811e-8d67-4bce-9b9b-f7e78b8b559c req-4d277372-abac-47dd-be8a-b2fba57bcae0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.589 188707 INFO nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Creating config drive at /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.config
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.596 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeajt9shy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.724 188707 DEBUG oslo_concurrency.processutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeajt9shy" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
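[editor's note] The mkisofs command above is logged as a flattened argument list; note that -publisher takes the whole "OpenStack Compute ..." string as a single argument. A hedged reconstruction through oslo.concurrency's processutils, which is what produced the "Running cmd"/"returned: 0" pair (illustrative only; the /tmp staging directory no longer exists after boot):

```python
from oslo_concurrency import processutils

ISO = ("/var/lib/nova/instances/"
       "fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.config")
out, err = processutils.execute(
    "/usr/bin/mkisofs", "-o", ISO,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmpeajt9shy")  # staging dir from the log; long gone
```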
Feb 24 15:45:36 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 24 15:45:36 compute-0 kernel: tap4fe2ff99-5b: entered promiscuous mode
Feb 24 15:45:36 compute-0 NetworkManager[56995]: <info>  [1771947936.8219] manager: (tap4fe2ff99-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Feb 24 15:45:36 compute-0 ovn_controller[98701]: 2026-02-24T15:45:36Z|00027|binding|INFO|Claiming lport 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 for this chassis.
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.826 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 ovn_controller[98701]: 2026-02-24T15:45:36Z|00028|binding|INFO|4fe2ff99-5ba5-49b4-a275-e8c5c9b51888: Claiming fa:16:3e:1e:4c:f6 192.168.0.39
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.837 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:36.852 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:4c:f6 192.168.0.39'], port_security=['fa:16:3e:1e:4c:f6 192.168.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.39/24', 'neutron:device_id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:45:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:36.855 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a bound to our chassis
Feb 24 15:45:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:36.861 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:45:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:36.863 108026 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpcw9tsnv_/privsep.sock']
Feb 24 15:45:36 compute-0 systemd-udevd[242086]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.894 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 ovn_controller[98701]: 2026-02-24T15:45:36Z|00029|binding|INFO|Setting lport 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 ovn-installed in OVS
Feb 24 15:45:36 compute-0 ovn_controller[98701]: 2026-02-24T15:45:36Z|00030|binding|INFO|Setting lport 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 up in Southbound
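[editor's note] The three ovn_controller lines are the chassis claiming the logical port and flipping it up in the Southbound DB. A hedged spot check, assuming ovn-sbctl on this host can reach the SB DB and that Port_Binding rows can be addressed by logical_port, as is usual for ovn-sbctl:

```python
import subprocess

print(subprocess.run(
    ["ovn-sbctl", "--columns=logical_port,chassis,up",
     "list", "Port_Binding", "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888"],
    capture_output=True, text=True, check=True).stdout)
```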
Feb 24 15:45:36 compute-0 NetworkManager[56995]: <info>  [1771947936.9004] device (tap4fe2ff99-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:45:36 compute-0 nova_compute[188703]: 2026-02-24 15:45:36.898 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:36 compute-0 NetworkManager[56995]: <info>  [1771947936.9022] device (tap4fe2ff99-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 15:45:36 compute-0 systemd-machined[158049]: New machine qemu-1-instance-00000001.
Feb 24 15:45:36 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.385 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771947937.3851895, fd83ae88-f3e1-49ef-8167-b8451d014cf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.386 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] VM Started (Lifecycle Event)
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.416 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.422 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771947937.3853383, fd83ae88-f3e1-49ef-8167-b8451d014cf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.423 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] VM Paused (Lifecycle Event)
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.438 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.445 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.470 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.499 188707 DEBUG nova.compute.manager [req-42462c6c-6cf4-46af-b155-e13e37d64189 req-a53bd497-32aa-4cd3-8cb6-08e724517424 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.500 188707 DEBUG oslo_concurrency.lockutils [req-42462c6c-6cf4-46af-b155-e13e37d64189 req-a53bd497-32aa-4cd3-8cb6-08e724517424 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.501 188707 DEBUG oslo_concurrency.lockutils [req-42462c6c-6cf4-46af-b155-e13e37d64189 req-a53bd497-32aa-4cd3-8cb6-08e724517424 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.502 188707 DEBUG oslo_concurrency.lockutils [req-42462c6c-6cf4-46af-b155-e13e37d64189 req-a53bd497-32aa-4cd3-8cb6-08e724517424 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.503 188707 DEBUG nova.compute.manager [req-42462c6c-6cf4-46af-b155-e13e37d64189 req-a53bd497-32aa-4cd3-8cb6-08e724517424 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Processing event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.505 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.514 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.515 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771947937.5134976, fd83ae88-f3e1-49ef-8167-b8451d014cf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.515 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] VM Resumed (Lifecycle Event)
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.524 188707 INFO nova.virt.libvirt.driver [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Instance spawned successfully.
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.526 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.535 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.542 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.571 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] During sync_power_state the instance has a pending task (spawning). Skip.
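[editor's note] The two "Synchronizing instance power state" lines above decode with the constants in nova.compute.power_state: DB power_state 0 is NOSTATE (the database row has not caught up yet), VM power_state 3 is PAUSED (nova launches the guest paused, then resumes it), and VM power_state 1 is RUNNING after the resume. A tiny decoder:

```python
# Values copied from nova.compute.power_state.
NOSTATE, RUNNING, PAUSED = 0x00, 0x01, 0x03
NAMES = {NOSTATE: "NOSTATE", RUNNING: "RUNNING", PAUSED: "PAUSED"}

# (DB, VM) pairs from the two sync lines: paused right after creation,
# running after the resume.
for db, vm in [(0, 3), (0, 1)]:
    print(f"DB power_state={NAMES[db]}, VM power_state={NAMES[vm]}")
```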
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.574 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.574 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.575 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.576 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.576 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.577 188707 DEBUG nova.virt.libvirt.driver [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.614 108026 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.615 108026 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcw9tsnv_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.473 242109 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.479 242109 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.482 242109 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.483 242109 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242109
Feb 24 15:45:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:37.621 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4d3f10-c2f2-4fa8-8fea-f430a74401c7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.631 188707 INFO nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Took 9.61 seconds to spawn the instance on the hypervisor.
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.632 188707 DEBUG nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.711 188707 INFO nova.compute.manager [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Took 10.23 seconds to build instance.
Feb 24 15:45:37 compute-0 nova_compute[188703]: 2026-02-24 15:45:37.739 188707 DEBUG oslo_concurrency.lockutils [None req-13d197db-fd4b-43bb-9b77-81972faa2d98 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.104 242109 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.104 242109 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.104 242109 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:38 compute-0 podman[242114]: 2026-02-24 15:45:38.145692872 +0000 UTC m=+0.095972424 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.645 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8d26a4-2466-44a3-9258-d4654a575871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.646 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap863f062e-11 in ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.649 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap863f062e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.649 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7c40fe-a22b-4219-a89a-a700cff70ab0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.652 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d81d4e1a-d7c2-4abd-be82-7514cc33d589]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.675 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[bbede844-300e-4671-bb75-852dabc098ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.691 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[fa271f4c-9b66-458a-b33f-fdb1f94bf772]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:38.692 108026 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp15pr2rtb/privsep.sock']
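[editor's note] The metadata agent is provisioning the datapath: a veth pair tap863f062e-1{0,1} with the -11 end inside the ovnmeta- namespace (the "not found in namespace None" line is just the pre-create existence check, and the link_cmd privsep helper above does the actual link operations). Neutron drives this through its privileged ip_lib, which wraps pyroute2; a standalone hedged sketch of the same plumbing, error handling omitted:

```python
from pyroute2 import IPRoute, netns

NS = "ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a"
netns.create(NS)

with IPRoute() as ipr:
    # Create the pair, then move the -11 end into the namespace.
    ipr.link("add", ifname="tap863f062e-10", kind="veth",
             peer={"ifname": "tap863f062e-11"})
    peer = ipr.link_lookup(ifname="tap863f062e-11")[0]
    ipr.link("set", index=peer, net_ns_fd=NS)
    host = ipr.link_lookup(ifname="tap863f062e-10")[0]
    ipr.link("set", index=host, state="up")
```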
Feb 24 15:45:39 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 15:45:39 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.346 108026 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.346 108026 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp15pr2rtb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.220 242142 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.226 242142 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.232 242142 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.232 242142 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242142
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.350 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[83d71b89-06d7-41ab-a795-570d1679871a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.632 188707 DEBUG nova.compute.manager [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.632 188707 DEBUG oslo_concurrency.lockutils [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.633 188707 DEBUG oslo_concurrency.lockutils [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.633 188707 DEBUG oslo_concurrency.lockutils [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.633 188707 DEBUG nova.compute.manager [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] No waiting events found dispatching network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:45:39 compute-0 nova_compute[188703]: 2026-02-24 15:45:39.633 188707 WARNING nova.compute.manager [req-aeb082fd-1a0f-491f-a956-64221a3bc5b1 req-8ea1900c-a68b-4aca-855a-d065a65f125b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received unexpected event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 for instance with vm_state active and task_state None.
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.827 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.828 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0c8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
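The burst of "Registering pollster" lines above is stevedore handing plugin extensions to a single-thread ThreadPoolExecutor, which is why the manager warned earlier that the cycle may run long. A minimal sketch of that dispatch pattern, assuming an illustrative entry-point namespace and a stand-in poll function (neither is the exact ceilometer code):

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load pollster plugins from an entry-point namespace; the namespace
    # string here is an assumption for illustration.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")

    def run_pollster(ext):
        # Stand-in for the real polling call; the real manager also passes
        # the shared cache, pollster history, and discovery cache seen above.
        print(f"polling {ext.name}")

    # One worker thread, as logged: with more pollsters than workers,
    # tasks queue up and the polling cycle simply takes longer.
    with ThreadPoolExecutor(max_workers=1) as pool:
        list(pool.map(run_pollster, mgr.extensions))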
Feb 24 15:45:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:39.840 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.861 242142 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.861 242142 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:39 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:39.861 242142 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.229 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fd83ae88-f3e1-49ef-8167-b8451d014cf7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.407 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[eef92931-a646-40f1-8b86-8b5724918079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 NetworkManager[56995]: <info>  [1771947940.4409] manager: (tap863f062e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.442 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[eceb03c8-cb08-4800-b747-c94b55a9d670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.469 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0fd25a-9c8d-4d82-98c7-22ab94976b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.474 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[aa572023-6178-479b-bd75-e2de9f8c1470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 systemd-udevd[242182]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:45:40 compute-0 NetworkManager[56995]: <info>  [1771947940.4998] device (tap863f062e-10): carrier: link connected
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.505 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[40aabdad-4b34-45af-bf63-b328c115e091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.525 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5cb024-bb89-4a3d-9611-3c90ceba5319]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 44855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242189, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.537 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5bc94c-9bcb-4abb-b916-f30cdb36fa35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:6f55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365300, 'tstamp': 365300}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242204, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.551 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[de02aabb-9862-441f-83b6-4ac0bcf13ecc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 44855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242209, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
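The RTM_NEWLINK and RTM_NEWADDR payloads above are netlink dumps taken inside the ovnmeta namespace on the agent's behalf by its privsep daemon. A rough equivalent with pyroute2 (must run as root; the interface and namespace names are taken from the log):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a") as ns:
        idx = ns.link_lookup(ifname="tap863f062e-11")[0]
        link = ns.get_links(idx)[0]            # RTM_NEWLINK record
        addrs = ns.get_addr(index=idx)         # RTM_NEWADDR records
        print(link.get_attr("IFLA_ADDRESS"))   # fa:16:3e:58:6f:55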
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.576 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0aeaf545-5613-4f01-bac0-066f5a66ddff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 podman[242171]: 2026-02-24 15:45:40.608741515 +0000 UTC m=+0.137043219 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, name=ubi9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.633 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9d82bb85-bbda-493f-ac0b-3de22c8cbd87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.637 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.638 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.638 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:45:40 compute-0 kernel: tap863f062e-10: entered promiscuous mode
Feb 24 15:45:40 compute-0 NetworkManager[56995]: <info>  [1771947940.6440] manager: (tap863f062e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Feb 24 15:45:40 compute-0 nova_compute[188703]: 2026-02-24 15:45:40.642 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:40 compute-0 nova_compute[188703]: 2026-02-24 15:45:40.647 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.649 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
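The three commands logged above (DelPortCommand against br-ex, AddPortCommand on br-int, then DbSetCommand stamping the OVN iface-id) map onto ovsdbapp's vsctl-style API. A sketch under assumptions: the OVSDB socket path is a guess, and the agent actually issues these as separate transactions rather than one batch.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is an assumption; the agent reads it from its config.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap863f062e-10", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap863f062e-10", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap863f062e-10",
            ("external_ids", {"iface-id": "e7d10e1c-8dfe-4042-832a-f76958f5496a"})))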
Feb 24 15:45:40 compute-0 ovn_controller[98701]: 2026-02-24T15:45:40Z|00031|binding|INFO|Releasing lport e7d10e1c-8dfe-4042-832a-f76958f5496a from this chassis (sb_readonly=0)
Feb 24 15:45:40 compute-0 nova_compute[188703]: 2026-02-24 15:45:40.652 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:40 compute-0 nova_compute[188703]: 2026-02-24 15:45:40.660 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.661 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/863f062e-1672-4c9a-8889-3b2ee95f838a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/863f062e-1672-4c9a-8889-3b2ee95f838a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.662 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[79cbc8fc-c638-4c32-9daf-ab42d76eeafb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.666 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: global
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/863f062e-1672-4c9a-8889-3b2ee95f838a.pid.haproxy
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 15:45:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:40.667 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'env', 'PROCESS_TAG=haproxy-863f062e-1672-4c9a-8889-3b2ee95f838a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/863f062e-1672-4c9a-8889-3b2ee95f838a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
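The rendered configuration above is written out and haproxy is then launched inside the network namespace through rootwrap, exactly as the command in the previous line shows. A sketch of that spawn step, reusing the logged command verbatim:

    import subprocess

    network_id = "863f062e-1672-4c9a-8889-3b2ee95f838a"
    cfg = f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"
    cmd = ["sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
           "ip", "netns", "exec", f"ovnmeta-{network_id}",
           "env", f"PROCESS_TAG=haproxy-{network_id}",
           "haproxy", "-f", cfg]
    # haproxy daemonizes itself (the "daemon" directive in the config
    # above), so the call returns once the proxy has forked.
    subprocess.check_call(cmd)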
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.672 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1727 Content-Type: application/json Date: Tue, 24 Feb 2026 15:45:40 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-db55fbb4-d011-4cd7-9ac7-9a02ca412162 x-openstack-request-id: req-db55fbb4-d011-4cd7-9ac7-9a02ca412162 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.673 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fd83ae88-f3e1-49ef-8167-b8451d014cf7", "name": "test_0", "status": "ACTIVE", "tenant_id": "4407f5b870e145d8917119ad928717e8", "user_id": "bd338d866e3242aeb685fec99c451955", "metadata": {}, "hostId": "781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62", "image": {"id": "de6b8fc8-e0dc-4bbf-943b-e6ac6027af11", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"}]}, "flavor": {"id": "521ca388-0b2e-40c6-bb06-118d4ed86b49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/521ca388-0b2e-40c6-bb06-118d4ed86b49"}]}, "created": "2026-02-24T15:45:24Z", "updated": "2026-02-24T15:45:37Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.39", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1e:4c:f6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fd83ae88-f3e1-49ef-8167-b8451d014cf7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fd83ae88-f3e1-49ef-8167-b8451d014cf7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T15:45:37.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.673 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fd83ae88-f3e1-49ef-8167-b8451d014cf7 used request id req-db55fbb4-d011-4cd7-9ac7-9a02ca412162 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
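The REQ/RESP pair above is novaclient's HTTP debug logging for a single servers.get() call (note the token appears only as a SHA256 digest). An equivalent client-side call, with the auth details as placeholders (ceilometer takes them from its service-credentials configuration):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url="https://keystone.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="secret",           # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default")
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))

    server = nova.servers.get("fd83ae88-f3e1-49ef-8167-b8451d014cf7")
    print(server.status)  # "ACTIVE", per the response body above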
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.675 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.676 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:45:40.676639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.708 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.708 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance fd83ae88-f3e1-49ef-8167-b8451d014cf7: ceilometer.compute.pollsters.NoVolumeException
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
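memory.usage failing with NoVolumeException right after boot is expected: the meter is derived from libvirt's per-domain memory statistics, and until the guest balloon driver reports, the needed keys are absent. A sketch of the underlying query, assuming the derivation is roughly available minus unused (instance name taken from the Nova response above):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    stats = dom.memoryStats()  # KiB values, e.g. 'available', 'unused', 'rss'
    if "available" in stats and "unused" in stats:
        usage_mb = (stats["available"] - stats["unused"]) / 1024.0
        print(f"memory.usage = {usage_mb:.1f} MB")
    else:
        # Balloon statistics not reported yet, as for this fresh instance.
        print("memory.usage unavailable")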
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:45:40.709604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.743 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.745 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.745 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.746 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.747 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:45:40.747401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.758 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fd83ae88-f3e1-49ef-8167-b8451d014cf7 / tap4fe2ff99-5b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.758 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.759 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.759 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.760 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.760 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.761 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.761 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:45:40.760173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
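network.incoming.bytes.delta reports 0 above because, as the earlier "No delta meter predecessor" line notes, the first reading of a vNIC has no prior sample to subtract. A minimal sketch of that cache-and-diff behavior (names are illustrative, not ceilometer's):

    _last = {}

    def delta_sample(instance_id, nic, value):
        # First observation of this vNIC: remember it and report 0,
        # matching the log; later readings return the difference.
        key = (instance_id, nic)
        prev = _last.get(key)
        _last[key] = value
        return 0 if prev is None else value - prev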
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:45:40.761731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.763 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.763 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:45:40.763138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.767 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.767 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.768 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:45:40.768752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.770 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.771 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.772 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.773 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:45:40.773590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.845 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.846 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.846 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.848 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:45:40.848361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.850 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 522265338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.850 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:45:40.850037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.850 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 4211047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.851 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.852 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 3060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:45:40.852225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:45:40.853784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.857 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.861 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.861 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.862 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.863 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:45:40.863407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.864 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:45:40.865443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.865 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.872 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.875 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:45:40.876694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.879 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.880 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.880 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.881 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.882 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:45:40.881520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.883 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.883 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.883 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.883 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:45:40.883039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.884 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T15:45:40.884751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.885 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.891 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.891 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.891 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:45:40.890504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.896 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.896 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:45:40.893429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:45:40.894308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:45:40.895346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.901 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.902 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.904 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:45:40.896287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T15:45:40.900186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:45:40.907056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.907 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.908 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:45:40.908728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:45:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:45:40.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
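The block above is one complete ceilometer polling cycle: the agent walks every configured pollster (cpu, disk.device.*, network.*) and emits one "Finished processing pollster" line per meter. As a rough sketch of that control flow (illustrative only; PollingTask here is a hypothetical stand-in, not the real ceilometer.polling.manager code):

    # Illustrative control flow only, not the real ceilometer.polling.manager.
    # A polling task walks its pollsters, gathers samples, and logs one
    # "Finished processing pollster [<name>]" line per meter, as seen above.
    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")

    class PollingTask:
        def __init__(self, pollsters):
            # mapping of meter name -> pollster object exposing get_samples()
            self.pollsters = pollsters

        def poll_and_notify(self, resources):
            for name, pollster in self.pollsters.items():
                try:
                    samples = list(pollster.get_samples(resources))
                except Exception:
                    LOG.exception("Polling %s failed", name)
                    continue
                # publishing of `samples` would happen here
                LOG.debug("Finished processing pollster [%s].", name)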
Feb 24 15:45:41 compute-0 nova_compute[188703]: 2026-02-24 15:45:41.066 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:41 compute-0 podman[242245]: 2026-02-24 15:45:41.136878157 +0000 UTC m=+0.092914199 container create 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 24 15:45:41 compute-0 systemd[1]: Started libpod-conmon-324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d.scope.
Feb 24 15:45:41 compute-0 podman[242245]: 2026-02-24 15:45:41.094225459 +0000 UTC m=+0.050261541 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 15:45:41 compute-0 systemd[1]: Started libcrun container.
Feb 24 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9251c3eaa094565743e690daf4a529b9b6c4f91a44e371326b9f997792597b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 15:45:41 compute-0 podman[242245]: 2026-02-24 15:45:41.242899381 +0000 UTC m=+0.198935463 container init 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 24 15:45:41 compute-0 podman[242245]: 2026-02-24 15:45:41.253938378 +0000 UTC m=+0.209974450 container start 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Feb 24 15:45:41 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [NOTICE]   (242262) : New worker (242264) forked
Feb 24 15:45:41 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [NOTICE]   (242262) : Loading success.
Feb 24 15:45:42 compute-0 nova_compute[188703]: 2026-02-24 15:45:42.028 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:42 compute-0 nova_compute[188703]: 2026-02-24 15:45:42.624 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:43 compute-0 nova_compute[188703]: 2026-02-24 15:45:43.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:44 compute-0 podman[242273]: 2026-02-24 15:45:44.14414692 +0000 UTC m=+0.099361529 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, version=9.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 15:45:44 compute-0 nova_compute[188703]: 2026-02-24 15:45:44.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:44 compute-0 nova_compute[188703]: 2026-02-24 15:45:44.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:45:44 compute-0 nova_compute[188703]: 2026-02-24 15:45:44.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:45:45 compute-0 nova_compute[188703]: 2026-02-24 15:45:45.354 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:45:45 compute-0 nova_compute[188703]: 2026-02-24 15:45:45.355 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:45:45 compute-0 nova_compute[188703]: 2026-02-24 15:45:45.355 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:45:45 compute-0 nova_compute[188703]: 2026-02-24 15:45:45.356 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:45:46 compute-0 nova_compute[188703]: 2026-02-24 15:45:46.068 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.005 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
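The network_info payload logged above is ordinary JSON, so pulling the useful fields out of a cached VIF entry takes only a few lines. A minimal sketch against the exact structure shown (the entry is abbreviated to the fields used):

    import json

    # One VIF entry in the shape logged by update_instance_cache_with_nw_info
    # above, trimmed to the fields this sketch reads.
    vif = json.loads("""{
      "id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888",
      "address": "fa:16:3e:1e:4c:f6",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.39",
                                        "type": "fixed",
                                        "floating_ips": []}]}]}
    }""")

    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]
             if ip["type"] == "fixed"]
    print(vif["address"], fixed)   # fa:16:3e:1e:4c:f6 ['192.168.0.39']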
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.023 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.024 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.025 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.025 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.026 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.027 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:47 compute-0 nova_compute[188703]: 2026-02-24 15:45:47.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:48 compute-0 podman[242296]: 2026-02-24 15:45:48.135829726 +0000 UTC m=+0.089391061 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 15:45:48 compute-0 podman[242297]: 2026-02-24 15:45:48.208527902 +0000 UTC m=+0.149121085 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:45:48 compute-0 nova_compute[188703]: 2026-02-24 15:45:48.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:45:48 compute-0 nova_compute[188703]: 2026-02-24 15:45:48.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:48 compute-0 nova_compute[188703]: 2026-02-24 15:45:48.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:48 compute-0 nova_compute[188703]: 2026-02-24 15:45:48.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
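The Acquiring/acquired/released triple above (lockutils.py:404/409/423) is the standard trace left by oslo.concurrency's synchronized decorator, which records how long the caller waited for the lock and how long it was held. The usage pattern behind it looks roughly like this (the decorator is real oslo_concurrency API; the class and method are stand-ins for the Nova resource tracker):

    # Sketch of the usage that produces the Acquiring/acquired/released
    # triple above. The debug lines come from oslo_concurrency itself; the
    # class and method below are illustrative, not Nova's actual code.
    from oslo_concurrency import lockutils

    class ResourceTrackerSketch:
        @lockutils.synchronized("compute_resources")
        def clean_compute_node_cache(self):
            # Runs with the in-process "compute_resources" lock held;
            # entry and exit are logged with waited/held durations.
            pass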
Feb 24 15:45:48 compute-0 nova_compute[188703]: 2026-02-24 15:45:48.974 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.112 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.199 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.202 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.279 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.281 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.361 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.362 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.422 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
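Each disk probe above is qemu-img info run under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a hung or pathological qemu-img cannot drag down the compute agent. The invocation can be reproduced verbatim from the log (a sketch; the disk path is the one from the entries above, and oslo.concurrency plus qemu-img must be present on the host):

    # Re-run the exact probe Nova logs above: qemu-img info under
    # oslo_concurrency.prlimit (1 GiB address space, 30 s CPU), JSON output.
    import json
    import subprocess

    disk = "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["virtual-size"], info.get("format"))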
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.888 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.889 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5249MB free_disk=72.2592544555664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.890 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.890 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.980 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.981 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:45:49 compute-0 nova_compute[188703]: 2026-02-24 15:45:49.981 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.036 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.082 188707 ERROR nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [req-be200250-9734-42e8-8186-249e4435eb28] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 3c29c547-d990-4bd5-9bfd-810bbeade4e4.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-be200250-9734-42e8-8186-249e4435eb28"}]}
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.097 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.120 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.121 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.138 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.168 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.209 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.245 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updated inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.245 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.246 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.267 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:45:50 compute-0 nova_compute[188703]: 2026-02-24 15:45:50.268 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
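The ERROR at 15:45:50.082 resolves itself in the lines that follow: Placement rejected the inventory PUT with 409 placement.concurrent_update because Nova's cached resource-provider generation was stale, so the report client re-read the provider (generation 3), retried, and the PUT succeeded, bumping the generation to 4. This is plain optimistic concurrency control. A minimal client-side sketch of the same loop (hypothetical endpoint and token; payload shapes follow the Placement inventories API):

    # Optimistic-concurrency loop against the Placement inventory API,
    # matching the 409 -> refresh -> retry sequence in the log above.
    import requests

    PLACEMENT = "http://placement.example.com"   # hypothetical endpoint
    RP = "3c29c547-d990-4bd5-9bfd-810bbeade4e4"  # provider UUID from the log
    HEADERS = {"X-Auth-Token": "hypothetical-token",
               "OpenStack-API-Version": "placement 1.26"}

    def set_inventory(inventories, retries=3):
        url = f"{PLACEMENT}/resource_providers/{RP}/inventories"
        for _ in range(retries):
            # Read the current generation: our optimistic-lock token.
            gen = requests.get(url, headers=HEADERS).json()[
                "resource_provider_generation"]
            resp = requests.put(url, headers=HEADERS, json={
                "resource_provider_generation": gen,
                "inventories": inventories,
            })
            if resp.status_code == 409:
                # placement.concurrent_update: another writer bumped the
                # generation first; re-read and retry, as Nova does above.
                continue
            resp.raise_for_status()
            return resp.json()
        raise RuntimeError("resource provider generation conflict persisted")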
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.072 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:51 compute-0 ovn_controller[98701]: 2026-02-24T15:45:51Z|00032|binding|INFO|Releasing lport e7d10e1c-8dfe-4042-832a-f76958f5496a from this chassis (sb_readonly=0)
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.089 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.0907] manager: (patch-br-int-to-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.0915] device (patch-br-int-to-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <warn>  [1771947951.0918] device (patch-br-int-to-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.0998] manager: (patch-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.1034] device (patch-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <warn>  [1771947951.1035] device (patch-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.1112] manager: (patch-br-int-to-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 24 15:45:51 compute-0 ovn_controller[98701]: 2026-02-24T15:45:51Z|00033|binding|INFO|Releasing lport e7d10e1c-8dfe-4042-832a-f76958f5496a from this chassis (sb_readonly=0)
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.113 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.1165] manager: (patch-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.1202] device (patch-br-int-to-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 24 15:45:51 compute-0 NetworkManager[56995]: <info>  [1771947951.1241] device (patch-provnet-ed61c9da-2816-42e0-8c94-e282fe423f14-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.128 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.808 188707 DEBUG nova.compute.manager [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-changed-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.809 188707 DEBUG nova.compute.manager [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Refreshing instance network info cache due to event network-changed-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.809 188707 DEBUG oslo_concurrency.lockutils [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.809 188707 DEBUG oslo_concurrency.lockutils [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:45:51 compute-0 nova_compute[188703]: 2026-02-24 15:45:51.810 188707 DEBUG nova.network.neutron [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Refreshing network info cache for port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:45:52 compute-0 nova_compute[188703]: 2026-02-24 15:45:52.034 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:53 compute-0 nova_compute[188703]: 2026-02-24 15:45:53.045 188707 DEBUG nova.network.neutron [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated VIF entry in instance network info cache for port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:45:53 compute-0 nova_compute[188703]: 2026-02-24 15:45:53.046 188707 DEBUG nova.network.neutron [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:45:53 compute-0 nova_compute[188703]: 2026-02-24 15:45:53.075 188707 DEBUG oslo_concurrency.lockutils [req-6dbb20d6-7705-4b3f-95dc-83c0a1a57b35 req-ea70f139-88e9-447e-89e3-bd9bdf899385 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:45:54 compute-0 podman[242354]: 2026-02-24 15:45:54.096282176 +0000 UTC m=+0.058826110 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:45:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:55.702 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:45:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:55.703 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:45:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:45:55.703 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:45:56 compute-0 nova_compute[188703]: 2026-02-24 15:45:56.075 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:57 compute-0 nova_compute[188703]: 2026-02-24 15:45:57.037 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:45:59 compute-0 podman[204685]: time="2026-02-24T15:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:45:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:45:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
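The two GETs above are the prometheus-podman-exporter scraping the libpod REST API over the podman socket (the socket path appears in the podman_exporter config_data earlier in the log). The same endpoint can be queried from Python with nothing but the standard library; a sketch, assuming the socket path and API version shown in the log:

    # Query the libpod REST API over the podman unix socket, the same call
    # logged above ("GET /v4.9.3/libpod/containers/json?all=true...").
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host only used for Host: header
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")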
Feb 24 15:46:01 compute-0 nova_compute[188703]: 2026-02-24 15:46:01.078 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:01 compute-0 openstack_network_exporter[207830]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:46:01 compute-0 openstack_network_exporter[207830]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:46:02 compute-0 nova_compute[188703]: 2026-02-24 15:46:02.042 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:06 compute-0 nova_compute[188703]: 2026-02-24 15:46:06.080 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:07 compute-0 nova_compute[188703]: 2026-02-24 15:46:07.044 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:07 compute-0 podman[242378]: 2026-02-24 15:46:07.171438539 +0000 UTC m=+0.123274925 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:46:07 compute-0 podman[242379]: 2026-02-24 15:46:07.187991849 +0000 UTC m=+0.136317628 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 24 15:46:09 compute-0 podman[242419]: 2026-02-24 15:46:09.169624442 +0000 UTC m=+0.124381166 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:46:10 compute-0 ovn_controller[98701]: 2026-02-24T15:46:10Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:4c:f6 192.168.0.39
Feb 24 15:46:10 compute-0 ovn_controller[98701]: 2026-02-24T15:46:10Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:4c:f6 192.168.0.39
Feb 24 15:46:11 compute-0 nova_compute[188703]: 2026-02-24 15:46:11.085 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:11 compute-0 podman[242454]: 2026-02-24 15:46:11.149434203 +0000 UTC m=+0.105899371 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 24 15:46:12 compute-0 nova_compute[188703]: 2026-02-24 15:46:12.047 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:14 compute-0 podman[242474]: 2026-02-24 15:46:14.794402347 +0000 UTC m=+0.107538487 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, distribution-scope=public, architecture=x86_64, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 15:46:16 compute-0 nova_compute[188703]: 2026-02-24 15:46:16.088 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:17 compute-0 nova_compute[188703]: 2026-02-24 15:46:17.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:19 compute-0 podman[242495]: 2026-02-24 15:46:19.167160472 +0000 UTC m=+0.121022023 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 15:46:19 compute-0 podman[242496]: 2026-02-24 15:46:19.21198506 +0000 UTC m=+0.160480545 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:46:21 compute-0 ovn_controller[98701]: 2026-02-24T15:46:21Z|00034|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb 24 15:46:21 compute-0 nova_compute[188703]: 2026-02-24 15:46:21.093 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:22 compute-0 nova_compute[188703]: 2026-02-24 15:46:22.054 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:25 compute-0 podman[242543]: 2026-02-24 15:46:25.149269675 +0000 UTC m=+0.105656984 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:46:26 compute-0 nova_compute[188703]: 2026-02-24 15:46:26.099 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:27 compute-0 nova_compute[188703]: 2026-02-24 15:46:27.058 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:29 compute-0 podman[204685]: time="2026-02-24T15:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:46:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:46:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Feb 24 15:46:31 compute-0 nova_compute[188703]: 2026-02-24 15:46:31.102 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:31 compute-0 openstack_network_exporter[207830]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:46:31 compute-0 openstack_network_exporter[207830]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:46:32 compute-0 nova_compute[188703]: 2026-02-24 15:46:32.061 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:36 compute-0 nova_compute[188703]: 2026-02-24 15:46:36.106 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:37 compute-0 nova_compute[188703]: 2026-02-24 15:46:37.064 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:38 compute-0 podman[242569]: 2026-02-24 15:46:38.134876529 +0000 UTC m=+0.090960000 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 15:46:38 compute-0 podman[242568]: 2026-02-24 15:46:38.156826777 +0000 UTC m=+0.109889222 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:46:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:38.398 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:46:38 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:38.400 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:46:38 compute-0 nova_compute[188703]: 2026-02-24 15:46:38.401 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:40 compute-0 podman[242608]: 2026-02-24 15:46:40.182775551 +0000 UTC m=+0.136500287 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:46:40 compute-0 nova_compute[188703]: 2026-02-24 15:46:40.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:40 compute-0 nova_compute[188703]: 2026-02-24 15:46:40.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 15:46:40 compute-0 nova_compute[188703]: 2026-02-24 15:46:40.960 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:41 compute-0 nova_compute[188703]: 2026-02-24 15:46:41.109 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:41.404 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:42 compute-0 nova_compute[188703]: 2026-02-24 15:46:42.067 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:42 compute-0 podman[242630]: 2026-02-24 15:46:42.14272496 +0000 UTC m=+0.085440340 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=kepler, io.buildah.version=1.29.0, release-0.7.12=, managed_by=edpm_ansible)
Feb 24 15:46:43 compute-0 nova_compute[188703]: 2026-02-24 15:46:43.981 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:44 compute-0 nova_compute[188703]: 2026-02-24 15:46:44.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:44 compute-0 nova_compute[188703]: 2026-02-24 15:46:44.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:46:44 compute-0 nova_compute[188703]: 2026-02-24 15:46:44.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:46:45 compute-0 podman[242650]: 2026-02-24 15:46:45.124661063 +0000 UTC m=+0.086685401 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 15:46:45 compute-0 nova_compute[188703]: 2026-02-24 15:46:45.365 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:46:45 compute-0 nova_compute[188703]: 2026-02-24 15:46:45.366 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:46:45 compute-0 nova_compute[188703]: 2026-02-24 15:46:45.367 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:46:45 compute-0 nova_compute[188703]: 2026-02-24 15:46:45.367 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:46:46 compute-0 nova_compute[188703]: 2026-02-24 15:46:46.114 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:46 compute-0 nova_compute[188703]: 2026-02-24 15:46:46.913 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:46 compute-0 nova_compute[188703]: 2026-02-24 15:46:46.914 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:46 compute-0 nova_compute[188703]: 2026-02-24 15:46:46.937 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.069 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.121 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.137 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.137 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.138 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.138 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.244 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.245 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.261 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.262 188707 INFO nova.compute.claims [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Claim successful on node compute-0.ctlplane.example.com
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.632 188707 DEBUG nova.compute.provider_tree [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.656 188707 DEBUG nova.scheduler.client.report [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.688 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.689 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.757 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.758 188707 DEBUG nova.network.neutron [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.789 188707 INFO nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.832 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.989 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.991 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.991 188707 INFO nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Creating image(s)
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.992 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.992 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:47 compute-0 nova_compute[188703]: 2026-02-24 15:46:47.993 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.019 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.029 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.030 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.031 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.062 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.063 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.064 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.091 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.155 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.156 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.191 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.193 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.193 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.250 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.252 188707 DEBUG nova.virt.disk.api [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking if we can resize image /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.252 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.331 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.332 188707 DEBUG nova.virt.disk.api [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Cannot resize image /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.333 188707 DEBUG nova.objects.instance [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e6fb5f9-248e-440a-9cd9-472a05ab19ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.347 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.347 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.349 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.376 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.437 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.439 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.440 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.468 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.559 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.561 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.608 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.609 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
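Under the ephemeral_1_0706d66 lock, create_qcow2_image materializes the ephemeral disk as a qcow2 overlay whose backing file is the shared raw base image, so per-instance writes land in the overlay while the cached base stays read-only and can back many guests. The logged command reduced to a sketch (paths and size copied from the log):

    import subprocess

    base = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    overlay = "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0"
    subprocess.check_call([
        "qemu-img", "create", "-f", "qcow2",
        "-o", "backing_file=%s,backing_fmt=raw" % base,
        overlay, "1073741824",  # 1 GiB, matching ephemeral_gb=1 in the flavor
    ])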
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.609 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.685 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
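Every qemu-img info call above is wrapped in python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30, which caps the child's address space at 1 GiB and its CPU time at 30 s so a hostile or corrupt image cannot exhaust the compute host. A stdlib-only sketch of the same idea, setting the rlimits in the forked child just before exec:

    import resource
    import subprocess

    def run_capped(cmd, as_bytes=1 << 30, cpu_seconds=30):
        def set_limits():
            # Runs in the child after fork(), before the target command is exec'd.
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.check_output(cmd, preexec_fn=set_limits)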
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.686 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.687 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Ensure instance console log exists: /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.688 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.688 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.689 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 15:46:48 compute-0 nova_compute[188703]: 2026-02-24 15:46:48.966 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
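Interleaved with the spawn, request req-b316a39c-... is the service's periodic-task loop (_instance_usage_audit, _run_pending_deletes, and later update_available_resource) firing on its own schedule inside the same process. A generic stand-in for that pattern using only the stdlib (the real machinery is oslo_service.periodic_task):

    import threading
    import time

    def run_periodic_tasks(tasks, spacing=60.0):
        # Run each housekeeping task in turn on a fixed cadence, independent
        # of whatever API-driven work (such as the spawn above) is in flight.
        def loop():
            while True:
                for task in tasks:
                    task()
                time.sleep(spacing)
        threading.Thread(target=loop, daemon=True).start()

    run_periodic_tasks([lambda: print("Cleaning up deleted instances")])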
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.772 188707 DEBUG nova.network.neutron [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Successfully updated port: 93527468-4177-4f9e-a801-345f54dbe456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.791 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.792 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.792 188707 DEBUG nova.network.neutron [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.885 188707 DEBUG nova.compute.manager [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-changed-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.886 188707 DEBUG nova.compute.manager [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Refreshing instance network info cache due to event network-changed-93527468-4177-4f9e-a801-345f54dbe456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.886 188707 DEBUG oslo_concurrency.lockutils [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:46:49 compute-0 nova_compute[188703]: 2026-02-24 15:46:49.934 188707 DEBUG nova.network.neutron [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:46:50 compute-0 podman[242700]: 2026-02-24 15:46:50.152406907 +0000 UTC m=+0.092980701 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:46:50 compute-0 podman[242701]: 2026-02-24 15:46:50.232212143 +0000 UTC m=+0.169922795 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.897 188707 DEBUG nova.network.neutron [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.922 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.923 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Instance network_info: |[{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.923 188707 DEBUG oslo_concurrency.lockutils [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.924 188707 DEBUG nova.network.neutron [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Refreshing network info cache for port 93527468-4177-4f9e-a801-345f54dbe456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
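The network_info structure logged above is plain JSON, so the fields that matter for the rest of the boot (fixed and floating addresses, MTU, OVN bridge binding) can be pulled out directly. A sketch, assuming the logged list has been captured into a string named network_info_json:

    import json

    vifs = json.loads(network_info_json)  # the [{"id": "93527468-...", ...}] list above
    for vif in vifs:
        subnet = vif["network"]["subnets"][0]
        fixed = subnet["ips"][0]
        print(vif["id"], vif["address"],
              "bridge", vif["details"]["bridge_name"],
              "mtu", vif["network"]["meta"]["mtu"],
              "fixed", fixed["address"],
              "floating", [f["address"] for f in fixed["floating_ips"]])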
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.931 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Start _get_guest_xml network_info=[{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.944 188707 WARNING nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.963 188707 DEBUG nova.virt.libvirt.host [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.964 188707 DEBUG nova.virt.libvirt.host [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.965 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.973 188707 DEBUG nova.virt.libvirt.host [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.974 188707 DEBUG nova.virt.libvirt.host [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
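The two probes above show this host failing the cgroups v1 check and passing the v2 one, so CPU tuning will go through the unified hierarchy. On a v2 host the equivalent test is a one-file read; a sketch (the path is the conventional v2 mount point, assumed here):

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On cgroups v2 the root cgroup.controllers file lists every available
        # controller, e.g. "cpuset cpu io memory hugetlb pids ...".
        try:
            with open(root + "/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a unified (v2) hierarchy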
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.975 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.976 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T15:44:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='521ca388-0b2e-40c6-bb06-118d4ed86b49',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.977 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.977 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.978 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.978 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.979 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.979 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.980 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.980 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.980 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.981 188707 DEBUG nova.virt.hardware [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
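With no flavor or image constraints (limits and preferences 0:0:0, maxima 65536), choosing a guest CPU topology reduces to enumerating the sockets x cores x threads factorizations of the vCPU count that fit the limits; for vcpus=1 the only candidate is 1:1:1, exactly what the driver selects. A simplified enumeration consistent with the logged result (nova's full algorithm applies more preference ordering than this):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"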
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.987 188707 DEBUG nova.virt.libvirt.vif [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:46:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',id=2,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-00l0gxsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:46:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=4e6fb5f9-248e-440a-9cd9-472a05ab19ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.988 188707 DEBUG nova.network.os_vif_util [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.989 188707 DEBUG nova.network.os_vif_util [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:46:50 compute-0 nova_compute[188703]: 2026-02-24 15:46:50.990 188707 DEBUG nova.objects.instance [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e6fb5f9-248e-440a-9cd9-472a05ab19ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.026 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] End _get_guest_xml xml=<domain type="kvm">
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <uuid>4e6fb5f9-248e-440a-9cd9-472a05ab19ee</uuid>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <name>instance-00000002</name>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <memory>524288</memory>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <metadata>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:name>vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2</nova:name>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 15:46:50</nova:creationTime>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:flavor name="m1.small">
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:memory>512</nova:memory>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:ephemeral>1</nova:ephemeral>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:user uuid="bd338d866e3242aeb685fec99c451955">admin</nova:user>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:project uuid="4407f5b870e145d8917119ad928717e8">admin</nova:project>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         <nova:port uuid="93527468-4177-4f9e-a801-345f54dbe456">
Feb 24 15:46:51 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="192.168.0.224" ipVersion="4"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </metadata>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <system>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="serial">4e6fb5f9-248e-440a-9cd9-472a05ab19ee</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="uuid">4e6fb5f9-248e-440a-9cd9-472a05ab19ee</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </system>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <os>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </os>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <features>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <apic/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </features>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </clock>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <target dev="vdb" bus="virtio"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.config"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:3a:32:ce"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <target dev="tap93527468-41"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/console.log" append="off"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </serial>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <video>
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </video>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 15:46:51 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 15:46:51 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 15:46:51 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:46:51 compute-0 nova_compute[188703]: </domain>
Feb 24 15:46:51 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
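The domain XML above closes the guest definition Nova hands to libvirt. Note the block of 24 pcie-root-port controllers: on q35 machine types (image_hw_machine_type='q35' in the instance metadata below) Nova pre-allocates spare root ports so devices can be hot-plugged later without adding a controller at runtime. A minimal sketch for sanity-checking that pool, assuming the XML has been saved to guest.xml (hypothetical filename):

    # Count hot-pluggable PCIe root ports in the logged domain XML.
    import xml.etree.ElementTree as ET

    tree = ET.parse("guest.xml")
    ports = tree.findall("./devices/controller[@model='pcie-root-port']")
    print(f"pcie-root-port controllers: {len(ports)}")  # 24 in the log above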
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.026 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Preparing to wait for external event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.027 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.027 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.028 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.029 188707 DEBUG nova.virt.libvirt.vif [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:46:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',id=2,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-00l0gxsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:46:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Feb 24 15:46:51 compute-0 nova_compute[188703]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=4e6fb5f9-248e-440a-9cd9-472a05ab19ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.029 188707 DEBUG nova.network.os_vif_util [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.030 188707 DEBUG nova.network.os_vif_util [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.031 188707 DEBUG os_vif [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.033 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.033 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.034 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.037 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.038 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93527468-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.038 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93527468-41, col_values=(('external_ids', {'iface-id': '93527468-4177-4f9e-a801-345f54dbe456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:32:ce', 'vm-uuid': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
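os-vif drives the plug through the OVSDB IDL, as the AddPortCommand/DbSetCommand transaction above shows: create the tap port on br-int, then stamp the Interface row's external_ids with the Neutron port UUID and MAC so ovn-controller can match it. The CLI equivalent, with values copied from the log (a sketch of the same effect, not what Nova executes):

    # ovs-vsctl equivalent of the logged OVSDB transaction.
    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap93527468-41",
        "--", "set", "Interface", "tap93527468-41",
        "external_ids:iface-id=93527468-4177-4f9e-a801-345f54dbe456",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:3a:32:ce",
        "external_ids:vm-uuid=4e6fb5f9-248e-440a-9cd9-472a05ab19ee",
    ], check=True)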
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 NetworkManager[56995]: <info>  [1771948011.0422] manager: (tap93527468-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.045 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.049 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.051 188707 INFO os_vif [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41')
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.096 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.097 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.097 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.098 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.163 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.164 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.164 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.164 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No VIF found with MAC fa:16:3e:3a:32:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.165 188707 INFO nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Using config drive
Feb 24 15:46:51 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:46:50.987 188707 DEBUG nova.virt.libvirt.vif [None req-27e068f5-45 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 24 15:46:51 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:46:51.029 188707 DEBUG nova.virt.libvirt.vif [None req-27e068f5-45 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
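These two rsyslogd warnings explain the unprefixed continuation lines in the large nova.virt.libvirt.vif dumps above: both records exceeded rsyslog's configured 8096-byte limit, so their overflow appears without a syslog prefix. If complete records matter, the usual fix is raising $MaxMessageSize in /etc/rsyslog.conf (it must be set before any input module loads); the value to pick is a deployment choice, not something this log prescribes.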
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.248 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.330 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.331 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.390 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.391 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.477 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.478 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.562 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.571 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.633 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
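The resource audit (req-b316a39c) sizes every instance disk by running qemu-img info in JSON mode under prlimit address-space and CPU caps, with --force-share so it can read images a running QEMU holds open. Stripped of the prlimit wrapper, the probe reduces to this sketch (path copied from the log):

    # What the disk audit boils down to: qemu-img info, parsed as JSON.
    import json
    import subprocess

    out = subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk"],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"], info.get("actual-size"))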
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.636 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.722 188707 INFO nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Creating config drive at /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.config
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.728 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxe71666g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.740 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.749 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.826 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.827 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.845 188707 DEBUG oslo_concurrency.processutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxe71666g" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
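The config drive is a plain ISO9660 image: Nova stages the metadata under a temp directory (/tmp/tmpxe71666g here) and mkisofs packs it with volume label config-2, which is how the guest locates it. From inside the guest, a sketch (mount point hypothetical):

    # Find and mount the config drive by its volume label.
    import subprocess

    dev = subprocess.run(
        ["blkid", "-t", "LABEL=config-2", "-o", "device"],
        check=True, capture_output=True, text=True).stdout.strip()
    subprocess.run(["mount", "-o", "ro", dev, "/mnt/config"], check=True)
    # metadata then lives under /mnt/config/openstack/latest/meta_data.json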
Feb 24 15:46:51 compute-0 kernel: tap93527468-41: entered promiscuous mode
Feb 24 15:46:51 compute-0 NetworkManager[56995]: <info>  [1771948011.9210] manager: (tap93527468-41): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Feb 24 15:46:51 compute-0 ovn_controller[98701]: 2026-02-24T15:46:51Z|00035|binding|INFO|Claiming lport 93527468-4177-4f9e-a801-345f54dbe456 for this chassis.
Feb 24 15:46:51 compute-0 ovn_controller[98701]: 2026-02-24T15:46:51Z|00036|binding|INFO|93527468-4177-4f9e-a801-345f54dbe456: Claiming fa:16:3e:3a:32:ce 192.168.0.224
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.923 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.927 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.931 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:32:ce 192.168.0.224'], port_security=['fa:16:3e:3a:32:ce 192.168.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-22n3finaao3u-a36yxyv7uiwf-port-trhq4wmeb2xv', 'neutron:cidrs': '192.168.0.224/24', 'neutron:device_id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-22n3finaao3u-a36yxyv7uiwf-port-trhq4wmeb2xv', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=93527468-4177-4f9e-a801-345f54dbe456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.933 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 93527468-4177-4f9e-a801-345f54dbe456 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a bound to our chassis
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.936 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:46:51 compute-0 ovn_controller[98701]: 2026-02-24T15:46:51Z|00037|binding|INFO|Setting lport 93527468-4177-4f9e-a801-345f54dbe456 ovn-installed in OVS
Feb 24 15:46:51 compute-0 ovn_controller[98701]: 2026-02-24T15:46:51Z|00038|binding|INFO|Setting lport 93527468-4177-4f9e-a801-345f54dbe456 up in Southbound
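The OVN side of the plug: ovn-controller claims the logical port because the Port_Binding's requested-chassis names this host and the new OVS interface carries a matching external_ids:iface-id; it then flags the interface ovn-installed and sets the port up in the Southbound DB, which is what ultimately lets Neutron emit the network-vif-plugged event Nova armed at 15:46:51.026. A one-line check of that wiring (sketch; interface name from the log):

    # Confirm the iface-id that ovn-controller keys on.
    import subprocess

    iface_id = subprocess.run(
        ["ovs-vsctl", "get", "Interface", "tap93527468-41",
         "external_ids:iface-id"],
        check=True, capture_output=True, text=True).stdout.strip().strip('"')
    print(iface_id)  # expected: 93527468-4177-4f9e-a801-345f54dbe456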
Feb 24 15:46:51 compute-0 nova_compute[188703]: 2026-02-24 15:46:51.944 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.951 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[74255628-440f-41ec-97bd-69feaad5594c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:51 compute-0 systemd-udevd[242792]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:46:51 compute-0 systemd-machined[158049]: New machine qemu-2-instance-00000002.
Feb 24 15:46:51 compute-0 NetworkManager[56995]: <info>  [1771948011.9767] device (tap93527468-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:46:51 compute-0 NetworkManager[56995]: <info>  [1771948011.9775] device (tap93527468-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.977 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[af64245d-0ce5-44f6-b9cb-44d69c5a708e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:51 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 24 15:46:51 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:51.981 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[476949a3-16e8-43fd-85da-3fd3d7d79bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.003 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[253748ee-01b3-443c-a026-95657d110025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.023 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ac71cc22-ea9a-4b84-9925-cd22d9564aac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 44855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242800, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.040 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c91f8e86-f54b-4b0a-85aa-aec9741c6398]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242804, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242804, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.042 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.044 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.046 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.046 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.046 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.047 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:46:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:52.047 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
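With the port bound to this chassis, the metadata agent provisions network 863f062e-1672-4c9a-8889-3b2ee95f838a: the tap863f062e-11 veth end sits in a per-network namespace and, per the privsep address dump above, carries both 169.254.169.254/32 (the metadata address) and an in-subnet 192.168.0.2/24. A sketch for inspecting the result (namespace name copied from the log):

    # Inspect the per-network metadata namespace.
    import subprocess

    ns = "ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
                   check=True)
    # expect 169.254.169.254/32 and 192.168.0.2/24 on tap863f062e-11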
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.073 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.227 188707 DEBUG nova.compute.manager [req-8fee7a0c-bd2c-4478-a4c4-16828f0dcc20 req-9b6196f5-7a15-48ef-8aff-378ad1b9d306 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.227 188707 DEBUG oslo_concurrency.lockutils [req-8fee7a0c-bd2c-4478-a4c4-16828f0dcc20 req-9b6196f5-7a15-48ef-8aff-378ad1b9d306 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.228 188707 DEBUG oslo_concurrency.lockutils [req-8fee7a0c-bd2c-4478-a4c4-16828f0dcc20 req-9b6196f5-7a15-48ef-8aff-378ad1b9d306 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.228 188707 DEBUG oslo_concurrency.lockutils [req-8fee7a0c-bd2c-4478-a4c4-16828f0dcc20 req-9b6196f5-7a15-48ef-8aff-378ad1b9d306 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.229 188707 DEBUG nova.compute.manager [req-8fee7a0c-bd2c-4478-a4c4-16828f0dcc20 req-9b6196f5-7a15-48ef-8aff-378ad1b9d306 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Processing event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.391 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.392 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5224MB free_disk=72.2397232055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.392 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.393 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.423 188707 DEBUG nova.network.neutron [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updated VIF entry in instance network info cache for port 93527468-4177-4f9e-a801-345f54dbe456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.424 188707 DEBUG nova.network.neutron [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.455 188707 DEBUG oslo_concurrency.lockutils [req-0499d880-3750-485b-b2a9-d96f79effcc3 req-5a3ca46f-7a42-452c-856c-a14f1cb8f412 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.479 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.480 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.481 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.481 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.543 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.558 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
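The inventory dict in that report line is enough to reproduce what Placement will actually schedule against. A minimal sketch of the usual capacity formula, (total - reserved) * allocation_ratio, applied to the logged values; the capacity() helper is illustrative, not a Nova or Placement API:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    def capacity(inv):
        # schedulable capacity per resource class = (total - reserved) * ratio
        return {rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
                for rc, v in inv.items()}

    print(capacity(inventory))
    # {'MEMORY_MB': 7167, 'VCPU': 32, 'DISK_GB': 70}

With two instances placed (2 VCPU, 1024 MB, 4 GB allocated per the resource view above), this host is nowhere near those ceilings.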
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.601 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.601 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.624 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948012.623954, 4e6fb5f9-248e-440a-9cd9-472a05ab19ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.625 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] VM Started (Lifecycle Event)
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.628 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.637 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.644 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.648 188707 INFO nova.virt.libvirt.driver [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Instance spawned successfully.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.648 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.654 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
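The numeric states in that message ("DB power_state: 0, VM power_state: 1") are Nova's hypervisor-independent power-state codes. A small lookup table, using the values I believe nova.compute.power_state defines (worth verifying against the deployed tree):

    # nova.compute.power_state codes, as I understand them (assumption)
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    db_state, vm_state = 0, 1  # values from the sync message above
    print(POWER_STATE[db_state], "->", POWER_STATE[vm_state])  # NOSTATE -> RUNNING

So the database still shows NOSTATE while libvirt already reports RUNNING, which is exactly the mismatch the sync task detects and then skips because the spawn is still in flight.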
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.676 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.676 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.677 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.679 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.680 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.681 188707 DEBUG nova.virt.libvirt.driver [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.689 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.690 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948012.6241891, 4e6fb5f9-248e-440a-9cd9-472a05ab19ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.691 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] VM Paused (Lifecycle Event)
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.726 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.732 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948012.6335201, 4e6fb5f9-248e-440a-9cd9-472a05ab19ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.733 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] VM Resumed (Lifecycle Event)
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.745 188707 INFO nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Took 4.76 seconds to spawn the instance on the hypervisor.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.746 188707 DEBUG nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.756 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.762 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.815 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.842 188707 INFO nova.compute.manager [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Took 5.85 seconds to build instance.
Feb 24 15:46:52 compute-0 nova_compute[188703]: 2026-02-24 15:46:52.861 188707 DEBUG oslo_concurrency.lockutils [None req-27e068f5-4526-4e24-af67-ce238e7f3724 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.325 188707 DEBUG nova.compute.manager [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.327 188707 DEBUG oslo_concurrency.lockutils [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.327 188707 DEBUG oslo_concurrency.lockutils [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.328 188707 DEBUG oslo_concurrency.lockutils [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.329 188707 DEBUG nova.compute.manager [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] No waiting events found dispatching network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:46:54 compute-0 nova_compute[188703]: 2026-02-24 15:46:54.330 188707 WARNING nova.compute.manager [req-e84646a7-d394-4d4d-9c21-6779f9c27546 req-edbdbe2b-aca0-4b22-8d7f-f3d075892084 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received unexpected event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 for instance with vm_state active and task_state None.
Feb 24 15:46:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:55.703 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:46:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:55.704 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:46:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:46:55.705 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:46:56 compute-0 nova_compute[188703]: 2026-02-24 15:46:56.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:56 compute-0 podman[242816]: 2026-02-24 15:46:56.135312669 +0000 UTC m=+0.084300541 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:46:57 compute-0 nova_compute[188703]: 2026-02-24 15:46:57.718 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:46:59 compute-0 podman[204685]: time="2026-02-24T15:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:46:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:46:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
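The two GET lines above are the podman service answering its libpod REST API over the Unix socket that the exporter mounts (/run/podman/podman.sock). A stdlib-only sketch of the same containers/json call; the UnixHTTPConnection class is an illustrative helper, and the response field names are my reading of the libpod list schema:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a Unix socket; libpod's API is not on TCP by default."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c.get("Names"), c.get("State"))  # assumed libpod list fields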
Feb 24 15:47:01 compute-0 nova_compute[188703]: 2026-02-24 15:47:01.044 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:01 compute-0 sshd-session[242815]: Invalid user ubnt from 80.94.95.116 port 63136
Feb 24 15:47:01 compute-0 sshd-session[242815]: Connection closed by invalid user ubnt 80.94.95.116 port 63136 [preauth]
Feb 24 15:47:01 compute-0 openstack_network_exporter[207830]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:47:01 compute-0 openstack_network_exporter[207830]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
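These two appctl failures are consistent with the VIF details logged earlier ("datapath_type": "system"): the dpif-netdev/* commands report on userspace (PMD) datapaths, and this host runs the kernel datapath, so there is nothing for them to query. A hedged sketch that lists what does exist, wrapping the standard ovs-appctl dpif/show command in subprocess:

    import subprocess

    # dpif/show lists the datapaths ovs-vswitchd actually manages; on this host
    # that should be the kernel ("system") datapath, which is why the
    # dpif-netdev/* (userspace PMD) calls above had nothing to answer with.
    out = subprocess.run(["ovs-appctl", "dpif/show"],
                         capture_output=True, text=True, check=False)
    print(out.stdout or out.stderr)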
Feb 24 15:47:02 compute-0 nova_compute[188703]: 2026-02-24 15:47:02.721 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:06 compute-0 nova_compute[188703]: 2026-02-24 15:47:06.047 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:07 compute-0 nova_compute[188703]: 2026-02-24 15:47:07.724 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:09 compute-0 podman[242842]: 2026-02-24 15:47:09.130163359 +0000 UTC m=+0.077603050 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 24 15:47:09 compute-0 podman[242841]: 2026-02-24 15:47:09.132763065 +0000 UTC m=+0.091417641 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:47:11 compute-0 nova_compute[188703]: 2026-02-24 15:47:11.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:11 compute-0 podman[242882]: 2026-02-24 15:47:11.111643995 +0000 UTC m=+0.074128723 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Feb 24 15:47:12 compute-0 nova_compute[188703]: 2026-02-24 15:47:12.731 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:13 compute-0 podman[242901]: 2026-02-24 15:47:13.16165504 +0000 UTC m=+0.110215439 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Feb 24 15:47:16 compute-0 nova_compute[188703]: 2026-02-24 15:47:16.054 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:16 compute-0 podman[242920]: 2026-02-24 15:47:16.124765547 +0000 UTC m=+0.077751166 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Feb 24 15:47:17 compute-0 nova_compute[188703]: 2026-02-24 15:47:17.732 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:21 compute-0 nova_compute[188703]: 2026-02-24 15:47:21.058 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:21 compute-0 podman[242942]: 2026-02-24 15:47:21.151866793 +0000 UTC m=+0.106966696 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:47:21 compute-0 podman[242943]: 2026-02-24 15:47:21.217861779 +0000 UTC m=+0.173877146 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 15:47:21 compute-0 ovn_controller[98701]: 2026-02-24T15:47:21Z|00039|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 24 15:47:22 compute-0 nova_compute[188703]: 2026-02-24 15:47:22.735 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:24 compute-0 ovn_controller[98701]: 2026-02-24T15:47:24Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:32:ce 192.168.0.224
Feb 24 15:47:24 compute-0 ovn_controller[98701]: 2026-02-24T15:47:24Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:32:ce 192.168.0.224
Feb 24 15:47:26 compute-0 nova_compute[188703]: 2026-02-24 15:47:26.060 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:27 compute-0 podman[243000]: 2026-02-24 15:47:27.160047328 +0000 UTC m=+0.109807649 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:47:27 compute-0 nova_compute[188703]: 2026-02-24 15:47:27.736 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:29 compute-0 podman[204685]: time="2026-02-24T15:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:47:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:47:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Feb 24 15:47:31 compute-0 nova_compute[188703]: 2026-02-24 15:47:31.065 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:31 compute-0 openstack_network_exporter[207830]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:47:31 compute-0 openstack_network_exporter[207830]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:47:32 compute-0 nova_compute[188703]: 2026-02-24 15:47:32.739 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:36 compute-0 nova_compute[188703]: 2026-02-24 15:47:36.069 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:37 compute-0 nova_compute[188703]: 2026-02-24 15:47:37.742 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.288 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.333 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.333 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid 4e6fb5f9-248e-440a-9cd9-472a05ab19ee _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.334 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.335 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.335 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.336 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.398 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:47:39 compute-0 nova_compute[188703]: 2026-02-24 15:47:39.400 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.828 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.828 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.828 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
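[Annotation] The repeated "Registering pollster" lines above are ceilometer's polling manager loading its compute pollsters as stevedore entry-point extensions and binding each one to a shared ThreadPoolExecutor. A minimal sketch of that loading pattern, assuming ceilometer's "ceilometer.poll.compute" entry-point namespace and a hypothetical run_pollster() stand-in for the real execution path:

    # Minimal sketch, not ceilometer's verbatim code: enumerate pollster
    # plugins from an entry-point namespace and dispatch them on a thread
    # pool, mirroring the "Registering pollster ... executed via executor"
    # lines above. run_pollster() is a hypothetical stand-in.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    def run_pollster(ext):
        # The real agent invokes the pollster's get_samples() here.
        print(f"polling {ext.name}")

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # compute pollster entry-point group
        invoke_on_load=False,
    )
    with ThreadPoolExecutor(max_workers=4) as executor:
        for ext in mgr:  # each ext is a stevedore.extension.Extension
            executor.submit(run_pollster, ext)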
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.840 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.844 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 15:47:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:39.846 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4e6fb5f9-248e-440a-9cd9-472a05ab19ee -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 15:47:40 compute-0 podman[243025]: 2026-02-24 15:47:40.093208072 +0000 UTC m=+0.056200699 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:47:40 compute-0 podman[243026]: 2026-02-24 15:47:40.118455193 +0000 UTC m=+0.078408112 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
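[Annotation] The two podman health_status events record periodic health checks against the node_exporter and ovn_metadata_agent containers. Each check can be reproduced by hand; a sketch assuming podman is on PATH, with the container name taken from the log's container_name field:

    # Run the configured healthcheck for the node_exporter container.
    # podman exits 0 when the check passes, matching
    # health_status=healthy / health_failing_streak=0 above.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "node_exporter"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout}")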
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.044 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 24 Feb 2026 15:47:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-dee88663-11e7-4a30-a8d5-b55e6796e006 x-openstack-request-id: req-dee88663-11e7-4a30-a8d5-b55e6796e006 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.044 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4e6fb5f9-248e-440a-9cd9-472a05ab19ee", "name": "vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2", "status": "ACTIVE", "tenant_id": "4407f5b870e145d8917119ad928717e8", "user_id": "bd338d866e3242aeb685fec99c451955", "metadata": {"metering.server_group": "105127c2-20fd-4471-8609-2ac19fea2fd2"}, "hostId": "781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62", "image": {"id": "de6b8fc8-e0dc-4bbf-943b-e6ac6027af11", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"}]}, "flavor": {"id": "521ca388-0b2e-40c6-bb06-118d4ed86b49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/521ca388-0b2e-40c6-bb06-118d4ed86b49"}]}, "created": "2026-02-24T15:46:45Z", "updated": "2026-02-24T15:46:52Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.224", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3a:32:ce"}, {"version": 4, "addr": "192.168.122.178", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3a:32:ce"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4e6fb5f9-248e-440a-9cd9-472a05ab19ee"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4e6fb5f9-248e-440a-9cd9-472a05ab19ee"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T15:46:52.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.045 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4e6fb5f9-248e-440a-9cd9-472a05ab19ee used request id req-dee88663-11e7-4a30-a8d5-b55e6796e006 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.046 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'name': 'vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
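[Annotation] The REQ/RESP pair above is ceilometer enriching its libvirt discovery with instance metadata from the Nova API at microversion 2.1. A sketch of the same GET using keystoneauth1; the auth URL and credentials are placeholders, since the agent takes its real ones from its service-credentials configuration:

    # Reproduce the GET shown in the REQ line. Placeholder credentials;
    # only the server URL and microversion header are taken from the log.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed endpoint
        username="ceilometer", password="REDACTED",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "4e6fb5f9-248e-440a-9cd9-472a05ab19ee",
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    print(resp.json()["server"]["status"])  # "ACTIVE" per the RESP BODY above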
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:47:41.048479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 nova_compute[188703]: 2026-02-24 15:47:41.073 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.092 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.129 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/memory.usage volume: 49.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
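[Annotation] The memory.usage samples (48.91 MB and 49.61 MB for the two instances) come out of ceilometer's libvirt inspector. A sketch of where such a number can be read, assuming libvirt-python against the local hypervisor; the available-minus-unused arithmetic is an assumption about the derivation, not ceilometer's verbatim code:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # name from the discovery line
    stats = dom.memoryStats()  # balloon counters in KiB: 'available', 'unused', 'rss', ...
    if "available" in stats and "unused" in stats:
        usage_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mb = stats.get("rss", 0) / 1024.0  # fallback when balloon stats are absent
    print(f"memory.usage ~ {usage_mb} MB")
    conn.close()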
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.130 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.131 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:47:41.131020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.171 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.172 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.172 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.207 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.208 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.208 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
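[Annotation] Each instance yields three disk.device.allocation samples, one per attached block device (consistent with the m1.small flavor's root disk, ephemeral disk, and a small config drive). libvirt reports allocation alongside capacity and physical size per device; a sketch, with the device names being typical virtio guesses rather than values from the log:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    for dev in ("vda", "vdb", "vdc"):  # assumed names: root, ephemeral, config drive
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity:", capacity, "allocation:", allocation, "physical:", physical)
    conn.close()

The same triple plausibly feeds the disk.device.capacity and disk.device.usage meters polled further down.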
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.211 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:47:41.211240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.217 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.223 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4e6fb5f9-248e-440a-9cd9-472a05ab19ee / tap93527468-41 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.223 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
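[Annotation] vNIC counters such as network.outgoing.packets.error are read per interface; the "No delta meter predecessor ... tap93527468-41" line shows the tap device name libvirt hands back for instance-00000002, and also why the first poll of a .delta meter for a new VM has nothing to diff against. A sketch of reading the raw counters, assuming libvirt-python:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000002")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap93527468-41")
    print("network.outgoing.packets.error:", tx_errs)  # 0 in the sample above
    print("network.incoming.bytes:", rx_bytes)
    conn.close()

The .delta variants polled below are computed agent-side as the difference from the previous reading of these same counters.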
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.225 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.225 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.226 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.227 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:47:41.225827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.228 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.229 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:47:41.228614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.230 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.231 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.231 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.233 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:47:41.231341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.234 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:47:41.233921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.234 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.235 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.235 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.236 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.236 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.237 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.238 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:47:41.238040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.352 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.352 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.353 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.465 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.466 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.466 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
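[Annotation] The per-device read counters (bytes here, requests and latency further down) map onto libvirt's block I/O statistics. A sketch of the basic counters, device name assumed as above:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000002")
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("disk.device.read.requests:", rd_req, "disk.device.read.bytes:", rd_bytes)
    # Cumulative read-latency totals (nanoseconds) are exposed separately,
    # e.g. dom.blockStatsFlags("vda")["rd_total_times"].
    conn.close()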
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.469 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:47:41.468836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.470 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.471 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.471 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.472 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.472 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.473 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 787461847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.473 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 157289336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.474 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 260856202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:47:41.471627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.475 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.476 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 33280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.476 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/cpu volume: 31270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
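[Annotation] The cpu samples are cumulative guest CPU time in nanoseconds (33280000000 ns is 33.28 s of CPU time since boot); per-domain CPU time is the last field of libvirt's domain info tuple. A sketch:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    state, max_mem_kib, mem_kib, nr_vcpus, cpu_time_ns = dom.info()
    print("cpu volume:", cpu_time_ns, "ns =", cpu_time_ns / 1e9, "s")
    conn.close()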
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.478 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.479 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.479 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.479 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.480 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.480 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.481 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:47:41.476259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.484 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.484 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.485 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.486 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.486 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.488 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:47:41.478949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.489 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.489 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.490 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.490 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.491 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.491 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.493 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:47:41.483662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.493 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.494 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.494 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.495 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.495 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 2545305171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.495 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 18006907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:47:41.486378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.496 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:47:41.488994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:47:41.493934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.499 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.499 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:47:41.498983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.500 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.501 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.501 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.501 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.503 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.504 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2>]
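The ERROR above is ceilometer's permanent-blacklist path: the libvirt inspector exposes no data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager stops scheduling those resources for this pollster on the "pollsters" source (the same pattern appears below for network.outgoing.bytes.rate). A minimal sketch of that pattern, assuming the PollsterBase/PollsterPermanentError interfaces behave the way these messages suggest; this is not the shipped pollster code:

    from ceilometer.polling import plugin_base

    class RatePollsterSketch(plugin_base.PollsterBase):
        """Hypothetical pollster illustrating the blacklist pattern."""

        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            stats = None  # stand-in for an inspector that provides no data
            if stats is None:
                # Tells the manager to drop these resources for this
                # pollster/source pair for good ("... anymore!").
                raise plugin_base.PollsterPermanentError(resources)
            return []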
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.505 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.506 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T15:47:41.503536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.506 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:47:41.505516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.507 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.507 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.508 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.510 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.511 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:47:41.510450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:47:41.513337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.513 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.516 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:47:41.514844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.517 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2>]
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:47:41.516274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T15:47:41.517508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.518 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:47:41.518479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.520 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.520 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:47:41.519933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:47:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:47:41.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
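That burst of "Finished processing pollster [...]" lines closes one polling task: every meter in the task has been discovered, heartbeated, sampled, and published. Purely as orientation, the cycle those DEBUG lines trace has roughly this shape (names here are hypothetical; the real logic lives in ceilometer/polling/manager.py at the file:line suffixes shown above):

    def run_polling_task(pollsters, discover, publish, heartbeat):
        for pollster in pollsters:
            # "Executing discovery process for pollsters [...]"
            resources = discover('local_instances')
            # "Pollster heartbeat update: <name>"
            heartbeat(pollster.name)
            try:
                # "<instance-uuid>/<meter> volume: <n>" lines, then publication
                for sample in pollster.get_samples(resources):
                    publish(sample)
            except Exception:
                # e.g. PollsterPermanentError -> "Prevent pollster ... anymore!"
                continue
            # "Finished polling pollster <name>" / "Finished processing pollster"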
Feb 24 15:47:42 compute-0 podman[243068]: 2026-02-24 15:47:42.140700183 +0000 UTC m=+0.097368444 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 15:47:42 compute-0 nova_compute[188703]: 2026-02-24 15:47:42.747 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:44 compute-0 podman[243087]: 2026-02-24 15:47:44.188258995 +0000 UTC m=+0.127401255 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
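The two podman lines above (and the ones for openstack_network_exporter and ovn_controller below) are periodic container health checks; everything after "container health_status" is one flat key=value dump. A throwaway sketch for lifting the interesting fields out of such lines; the regex is ours, not podman's, and leans on the field order visible in this log:

    import re

    HEALTH_RE = re.compile(
        r'container health_status .*?name=(?P<name>[^,]+), '
        r'health_status=(?P<status>[^,]+), '
        r'health_failing_streak=(?P<streak>\d+)')

    def parse_health(line):
        """Return {'name': ..., 'status': ..., 'streak': ...} or None."""
        m = HEALTH_RE.search(line)
        return m.groupdict() if m else None

For the kepler line above this returns {'name': 'kepler', 'status': 'healthy', 'streak': '0'}.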
Feb 24 15:47:44 compute-0 nova_compute[188703]: 2026-02-24 15:47:44.990 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:44 compute-0 nova_compute[188703]: 2026-02-24 15:47:44.991 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:47:44 compute-0 nova_compute[188703]: 2026-02-24 15:47:44.991 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:47:45 compute-0 nova_compute[188703]: 2026-02-24 15:47:45.433 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:47:45 compute-0 nova_compute[188703]: 2026-02-24 15:47:45.434 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:47:45 compute-0 nova_compute[188703]: 2026-02-24 15:47:45.435 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:47:45 compute-0 nova_compute[188703]: 2026-02-24 15:47:45.436 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.076 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.871 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.892 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.893 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
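The heal task above rewrote the instance's network info cache, and the payload logged at 15:47:46.871 is plain JSON. A small sketch for pulling the fixed and floating addresses back out of such a payload (structure exactly as shown in that line; error handling omitted):

    import json

    def addresses(network_info_json):
        """Yield (fixed_ip, [floating_ips]) pairs from a network_info dump."""
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    floats = [f['address'] for f in ip.get('floating_ips', [])]
                    yield ip['address'], floats

For the entry above this yields ('192.168.0.39', ['192.168.122.189']).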
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.894 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.895 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:46 compute-0 nova_compute[188703]: 2026-02-24 15:47:46.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:47 compute-0 podman[243107]: 2026-02-24 15:47:47.152038599 +0000 UTC m=+0.110815414 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, version=9.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 15:47:47 compute-0 nova_compute[188703]: 2026-02-24 15:47:47.750 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:47 compute-0 nova_compute[188703]: 2026-02-24 15:47:47.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:49 compute-0 nova_compute[188703]: 2026-02-24 15:47:49.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:49 compute-0 nova_compute[188703]: 2026-02-24 15:47:49.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:49 compute-0 nova_compute[188703]: 2026-02-24 15:47:49.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:47:50 compute-0 nova_compute[188703]: 2026-02-24 15:47:50.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:51 compute-0 nova_compute[188703]: 2026-02-24 15:47:51.080 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:52 compute-0 podman[243131]: 2026-02-24 15:47:52.146312633 +0000 UTC m=+0.090665263 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 24 15:47:52 compute-0 podman[243130]: 2026-02-24 15:47:52.152473329 +0000 UTC m=+0.100624855 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.753 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.968 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:47:52 compute-0 nova_compute[188703]: 2026-02-24 15:47:52.970 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.051 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.101 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.103 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.169 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.170 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.221 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.222 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.274 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.281 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.337 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.338 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.428 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.430 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.511 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.513 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.595 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.954 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.957 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5045MB free_disk=72.21736907958984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.958 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:47:53 compute-0 nova_compute[188703]: 2026-02-24 15:47:53.959 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.040 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.040 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.041 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.041 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.115 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.225 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.254 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:47:54 compute-0 nova_compute[188703]: 2026-02-24 15:47:54.254 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:47:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:47:55.705 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:47:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:47:55.705 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:47:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:47:55.706 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:47:56 compute-0 nova_compute[188703]: 2026-02-24 15:47:56.084 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:57 compute-0 nova_compute[188703]: 2026-02-24 15:47:57.755 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:47:59 compute-0 podman[243196]: 2026-02-24 15:47:59.186045214 +0000 UTC m=+0.065887874 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:47:59 compute-0 podman[204685]: time="2026-02-24T15:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:47:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:47:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Feb 24 15:48:01 compute-0 nova_compute[188703]: 2026-02-24 15:48:01.087 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:01 compute-0 openstack_network_exporter[207830]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:48:01 compute-0 openstack_network_exporter[207830]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:48:02 compute-0 nova_compute[188703]: 2026-02-24 15:48:02.758 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:06 compute-0 nova_compute[188703]: 2026-02-24 15:48:06.089 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:07 compute-0 nova_compute[188703]: 2026-02-24 15:48:07.761 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:11 compute-0 nova_compute[188703]: 2026-02-24 15:48:11.091 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:11 compute-0 podman[243220]: 2026-02-24 15:48:11.123221481 +0000 UTC m=+0.078980016 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:48:11 compute-0 podman[243221]: 2026-02-24 15:48:11.16846531 +0000 UTC m=+0.122717227 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:48:12 compute-0 nova_compute[188703]: 2026-02-24 15:48:12.764 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:13 compute-0 podman[243263]: 2026-02-24 15:48:13.162333549 +0000 UTC m=+0.115157044 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Feb 24 15:48:14 compute-0 podman[243282]: 2026-02-24 15:48:14.769750408 +0000 UTC m=+0.082129776 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release-0.7.12=, name=ubi9, vcs-type=git)
Feb 24 15:48:16 compute-0 nova_compute[188703]: 2026-02-24 15:48:16.095 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:17 compute-0 nova_compute[188703]: 2026-02-24 15:48:17.767 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:18 compute-0 podman[243302]: 2026-02-24 15:48:18.182651824 +0000 UTC m=+0.137757459 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1770267347, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git)
Feb 24 15:48:21 compute-0 nova_compute[188703]: 2026-02-24 15:48:21.099 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:22 compute-0 nova_compute[188703]: 2026-02-24 15:48:22.771 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:23 compute-0 podman[243325]: 2026-02-24 15:48:23.111707822 +0000 UTC m=+0.074984196 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 15:48:23 compute-0 podman[243326]: 2026-02-24 15:48:23.146377302 +0000 UTC m=+0.105823438 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 24 15:48:26 compute-0 nova_compute[188703]: 2026-02-24 15:48:26.102 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:27 compute-0 nova_compute[188703]: 2026-02-24 15:48:27.774 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:29 compute-0 podman[204685]: time="2026-02-24T15:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:48:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:48:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Feb 24 15:48:30 compute-0 podman[243368]: 2026-02-24 15:48:30.116306863 +0000 UTC m=+0.070543898 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:48:31 compute-0 nova_compute[188703]: 2026-02-24 15:48:31.105 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:31 compute-0 openstack_network_exporter[207830]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:48:31 compute-0 openstack_network_exporter[207830]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:48:32 compute-0 nova_compute[188703]: 2026-02-24 15:48:32.778 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:36 compute-0 nova_compute[188703]: 2026-02-24 15:48:36.107 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:37 compute-0 nova_compute[188703]: 2026-02-24 15:48:37.781 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:41 compute-0 nova_compute[188703]: 2026-02-24 15:48:41.112 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:42 compute-0 podman[243392]: 2026-02-24 15:48:42.140808126 +0000 UTC m=+0.097155682 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:48:42 compute-0 podman[243393]: 2026-02-24 15:48:42.183429361 +0000 UTC m=+0.135171479 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 24 15:48:42 compute-0 sshd-session[243430]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 15:48:42 compute-0 nova_compute[188703]: 2026-02-24 15:48:42.786 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:44 compute-0 podman[243432]: 2026-02-24 15:48:44.156123244 +0000 UTC m=+0.112625617 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 24 15:48:45 compute-0 podman[243450]: 2026-02-24 15:48:45.154972081 +0000 UTC m=+0.112800953 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=)
Feb 24 15:48:46 compute-0 nova_compute[188703]: 2026-02-24 15:48:46.117 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.255 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.258 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.749 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.750 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.751 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:48:47 compute-0 nova_compute[188703]: 2026-02-24 15:48:47.786 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:49 compute-0 podman[243475]: 2026-02-24 15:48:49.145543783 +0000 UTC m=+0.103822286 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.582 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.611 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.612 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
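[editor's note] The heal-cache lines above show the standard oslo.concurrency pattern nova wraps around the instance info cache: acquire a named lock ("refresh_cache-<uuid>"), refresh from neutron, release. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the helper name is illustrative, not nova's:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid):
        # lockutils.lock() is the context-manager form of the
        # Acquiring/Acquired/Releasing sequence seen in the log.
        with lockutils.lock("refresh_cache-%s" % instance_uuid):
            refresh_network_info(instance_uuid)  # hypothetical helper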
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.612 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.613 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.613 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:49 compute-0 nova_compute[188703]: 2026-02-24 15:48:49.613 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:50 compute-0 nova_compute[188703]: 2026-02-24 15:48:50.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:50 compute-0 nova_compute[188703]: 2026-02-24 15:48:50.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:50 compute-0 nova_compute[188703]: 2026-02-24 15:48:50.973 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:50 compute-0 nova_compute[188703]: 2026-02-24 15:48:50.974 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
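[editor's note] The _reclaim_queued_deletes task is a no-op here because reclaim_instance_interval is not set to a positive value; reclaiming soft-deleted instances only runs when it is. A minimal nova.conf fragment that would enable it, with an illustrative interval:

    [DEFAULT]
    # Seconds after soft-delete before an instance is reclaimed;
    # values <= 0 (the default) skip the periodic task, as logged above.
    reclaim_instance_interval = 3600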
Feb 24 15:48:51 compute-0 nova_compute[188703]: 2026-02-24 15:48:51.121 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:51 compute-0 nova_compute[188703]: 2026-02-24 15:48:51.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:52 compute-0 nova_compute[188703]: 2026-02-24 15:48:52.790 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:54 compute-0 podman[243497]: 2026-02-24 15:48:54.135554638 +0000 UTC m=+0.097618824 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 24 15:48:54 compute-0 podman[243498]: 2026-02-24 15:48:54.184308622 +0000 UTC m=+0.138056889 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 24 15:48:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 15:48:54 compute-0 nova_compute[188703]: 2026-02-24 15:48:54.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:48:54 compute-0 nova_compute[188703]: 2026-02-24 15:48:54.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:48:54 compute-0 nova_compute[188703]: 2026-02-24 15:48:54.974 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:48:54 compute-0 nova_compute[188703]: 2026-02-24 15:48:54.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:48:54 compute-0 nova_compute[188703]: 2026-02-24 15:48:54.975 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.047 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.110 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.111 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.173 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.175 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.227 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.230 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.286 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.298 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.357 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.359 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.423 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.424 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.514 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.516 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.564 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
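[editor's note] During the resource audit each instance disk is probed with qemu-img info, wrapped in oslo_concurrency.prlimit to cap address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a pathological image cannot stall the audit. A minimal sketch of the same call through oslo.concurrency, assuming it is installed; the disk path is illustrative:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        # Mirrors the --as/--cpu limits visible in the logged command line.
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30),
    )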
Feb 24 15:48:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:48:55.706 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:48:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:48:55.706 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:48:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:48:55.707 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.916 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.918 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=72.21736907958984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.919 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:48:55 compute-0 nova_compute[188703]: 2026-02-24 15:48:55.920 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.026 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.028 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.028 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.029 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.109 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.123 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.127 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.130 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:48:56 compute-0 nova_compute[188703]: 2026-02-24 15:48:56.131 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
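[editor's note] The inventory reported to placement (the set_inventory_for_provider line above) yields schedulable capacity as (total - reserved) * allocation_ratio per resource class, which is why 2 VCPUs used out of 8 physical still leaves ample headroom at a 4.0 ratio. A short worked check against the logged inventory:

    # Inventory values copied from the set_inventory_for_provider log line.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        capacity = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2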
Feb 24 15:48:57 compute-0 nova_compute[188703]: 2026-02-24 15:48:57.794 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:48:59 compute-0 podman[204685]: time="2026-02-24T15:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:48:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:48:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
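[editor's note] The GET lines above are the libpod REST API being polled over the podman socket (podman_exporter mounts /run/podman/podman.sock, per its config_data). A minimal sketch of issuing the same query from Python's standard library, assuming that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)  # 200, as in the access-log lines above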
Feb 24 15:49:01 compute-0 nova_compute[188703]: 2026-02-24 15:49:01.126 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:01 compute-0 podman[243568]: 2026-02-24 15:49:01.145647522 +0000 UTC m=+0.100200625 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:49:01 compute-0 openstack_network_exporter[207830]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:49:01 compute-0 openstack_network_exporter[207830]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
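[editor's note] These appctl errors recur on every scrape: the exporter invokes OVS dpif-netdev commands, which only apply to the userspace (netdev/DPDK) datapath, so on a host whose bridges use the kernel datapath there is no datapath for them to act on. A hedged reproduction of the call the exporter is making, via the ovs-appctl CLI:

    import subprocess

    # dpif-netdev/* commands target the userspace datapath; with only the
    # kernel datapath present, ovs-vswitchd answers exactly as in the log.
    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                       capture_output=True, text=True)
    print(r.returncode, r.stderr.strip())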
Feb 24 15:49:02 compute-0 nova_compute[188703]: 2026-02-24 15:49:02.796 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:06 compute-0 nova_compute[188703]: 2026-02-24 15:49:06.129 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:07 compute-0 nova_compute[188703]: 2026-02-24 15:49:07.799 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:11 compute-0 nova_compute[188703]: 2026-02-24 15:49:11.135 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:12 compute-0 nova_compute[188703]: 2026-02-24 15:49:12.802 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:13 compute-0 podman[243591]: 2026-02-24 15:49:13.136967431 +0000 UTC m=+0.088353810 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:49:13 compute-0 podman[243592]: 2026-02-24 15:49:13.174920978 +0000 UTC m=+0.120122036 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:49:14 compute-0 podman[243630]: 2026-02-24 15:49:14.805499612 +0000 UTC m=+0.134376699 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:49:16 compute-0 podman[243651]: 2026-02-24 15:49:16.126506746 +0000 UTC m=+0.085991314 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30)
Feb 24 15:49:16 compute-0 nova_compute[188703]: 2026-02-24 15:49:16.137 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:17 compute-0 nova_compute[188703]: 2026-02-24 15:49:17.807 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:20 compute-0 podman[243673]: 2026-02-24 15:49:20.126028635 +0000 UTC m=+0.080489742 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 24 15:49:21 compute-0 nova_compute[188703]: 2026-02-24 15:49:21.140 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:22 compute-0 nova_compute[188703]: 2026-02-24 15:49:22.806 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:25 compute-0 podman[243692]: 2026-02-24 15:49:25.169275088 +0000 UTC m=+0.122927493 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 15:49:25 compute-0 podman[243693]: 2026-02-24 15:49:25.216240274 +0000 UTC m=+0.168772278 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 24 15:49:26 compute-0 nova_compute[188703]: 2026-02-24 15:49:26.143 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:27 compute-0 nova_compute[188703]: 2026-02-24 15:49:27.810 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:29 compute-0 podman[204685]: time="2026-02-24T15:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:49:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:49:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Feb 24 15:49:31 compute-0 nova_compute[188703]: 2026-02-24 15:49:31.147 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:31 compute-0 openstack_network_exporter[207830]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:49:31 compute-0 openstack_network_exporter[207830]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:49:32 compute-0 podman[243736]: 2026-02-24 15:49:32.138780063 +0000 UTC m=+0.092286967 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:49:32 compute-0 nova_compute[188703]: 2026-02-24 15:49:32.811 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:36 compute-0 nova_compute[188703]: 2026-02-24 15:49:36.150 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:37 compute-0 nova_compute[188703]: 2026-02-24 15:49:37.813 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.828 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.829 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
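[Editor's note] The two manager messages above describe the dispatch model: with one worker thread and many pollsters, tasks queue up and run serially, so a polling cycle's wall time grows with the pollster count. A minimal sketch of that pattern (illustrative names, not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # Stand-in for the real work: discovery plus sampling for one meter.
        return f"polled {name}"

    pollsters = ["memory.usage", "disk.device.allocation",
                 "network.incoming.bytes", "power.state"]

    # max_workers=1 matches "[1] threads" in the log: every pollster after
    # the first waits in the queue, serializing the whole cycle.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)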
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.829 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.840 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.846 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'name': 'vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.846 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:49:39.847586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.879 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.911 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
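[Editor's note] memory.usage is reported in megabytes (assuming ceilometer's documented MB unit), so against the 512 MB m1.small flavor shown in the discovery output above these two samples sit just under 10% of guest RAM. A quick check with values copied from the log:

    flavor_ram_mib = 512  # m1.small, from the instance discovery data
    samples = {
        "fd83ae88-f3e1-49ef-8167-b8451d014cf7": 48.9140625,
        "4e6fb5f9-248e-440a-9cd9-472a05ab19ee": 49.05859375,
    }
    for uuid, used_mib in samples.items():
        print(f"{uuid}: {used_mib / flavor_ram_mib:.1%} of flavor RAM")  # ~9.6%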
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:49:39.913693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.913 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.946 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.947 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.947 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.976 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.976 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.977 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
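[Editor's note] Each instance emits one disk.device.allocation sample per block device; the three values per instance here are consistent with the flavor's 1 GiB root disk, 1 GiB ephemeral disk, and a small config-drive image (the device mapping is an assumption, not stated in the log). Summing gives the per-instance allocation:

    # Per-device samples for fd83ae88..., copied from the log; the device
    # order (root disk, ephemeral disk, config drive) is assumed.
    allocation_bytes = [21831680, 1253376, 487424]
    total = sum(allocation_bytes)
    print(f"{total} B allocated = {total / 1024**2:.1f} MiB")  # ~22.5 MiB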
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.978 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:49:39.979625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.985 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.989 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.990 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:49:39.990431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:49:39.992342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.993 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
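[Editor's note] network.incoming.bytes is a cumulative counter while network.incoming.bytes.delta is the change since the previous poll, so the two meters are related by simple subtraction; the first instance's delta of 0 just means no new traffic since the last cycle. For the second instance:

    curr_total = 4849  # 4e6fb5f9.../network.incoming.bytes, from the log
    delta = 3363       # 4e6fb5f9.../network.incoming.bytes.delta
    prev_total = curr_total - delta
    print(f"total at the previous poll: {prev_total} B")  # 1486 B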
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.994 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:49:39.994365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
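[Editor's note] power.state carries the raw libvirt domain state number, so the value 1 reported for both instances above means "running". A subset of libvirt's virDomainState values for reference (informational mapping, not ceilometer code):

    # libvirt virDomainState values (subset).
    LIBVIRT_POWER_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])  # both instances above report 1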
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.996 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.997 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.997 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.997 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.998 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:49:39.996323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:39.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:49:39.999774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.081 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.082 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.083 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.190 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.191 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.191 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.193 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.193 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.194 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:49:40.193635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.197 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:49:40.197014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.198 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.199 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.199 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 791899769 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.200 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 157289336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.201 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 260856202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.202 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.203 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.203 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 34960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.204 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/cpu volume: 119240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:49:40.203324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
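[Editor's note] The cpu meter is cumulative guest CPU time in nanoseconds, so a single sample (34.96 s for the first instance) says nothing about load by itself; utilization needs two samples and the interval between them. A worked sketch with a hypothetical follow-up sample:

    prev_cpu_ns = 34_960_000_000  # fd83ae88.../cpu, from the log
    curr_cpu_ns = 35_260_000_000  # hypothetical value at the next poll
    interval_s = 300.0            # hypothetical polling interval
    vcpus = 1                     # m1.small

    util = (curr_cpu_ns - prev_cpu_ns) / (interval_s * 1e9 * vcpus)
    print(f"{util:.1%} average CPU utilization over the interval")  # 0.1%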
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.207 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:49:40.207060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.207 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.208 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.208 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.208 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.209 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.209 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
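[Editor's note] disk.device.read.latency (sampled earlier in this cycle) is a cumulative counter in nanoseconds; dividing by the matching disk.device.read.requests counter gives a rough mean latency per read, assuming the samples for each instance arrive in the same device order. For the first instance's first device:

    read_latency_ns = 691_853_245  # fd83ae88... first device, disk.device.read.latency
    read_requests = 840            # fd83ae88... first device, disk.device.read.requests
    print(f"{read_latency_ns / read_requests / 1e6:.2f} ms mean per read")  # ~0.82 ms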
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:49:40.210709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.210 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.211 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.213 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.213 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.214 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.215 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.215 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.215 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.216 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.216 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.218 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.218 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.218 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.219 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 2717006851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.219 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 18006907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:49:40.212903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:49:40.214810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:49:40.218046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.220 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.222 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.222 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.222 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:49:40.222054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.223 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.223 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.224 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.224 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
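[annotation] Note that the "Pollster heartbeat update" lines are logged by worker 14 while the matching "Updated heartbeat for ..." lines come from worker 12 slightly later, sometimes interleaved out of timestamp order. A queue-based producer/consumer arrangement would produce exactly this pattern; the sketch below is an assumption inferred from the interleaving, not confirmed ceilometer code:

    # Assumed producer/consumer pattern behind the interleaved heartbeat lines:
    # the polling worker enqueues, a status worker drains and logs the update.
    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()

    def poller_side(pollster_name):
        heartbeats.put((pollster_name, datetime.datetime.utcnow()))

    def status_worker():
        while True:
            name, ts = heartbeats.get()
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    threading.Thread(target=status_worker, daemon=True).start()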
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.227 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.227 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.227 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:49:40.226131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.229 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.230 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.230 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.231 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.231 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:49:40.229804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:49:40.231740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.232 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.232 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes.delta volume: 2788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.233 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.233 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:49:40.234025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.235 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:49:40.235958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:49:40.237821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.238 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.238 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.239 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.240 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:49:40.240159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.240 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.241 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes volume: 4694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.241 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
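[annotation] Three flavors of the same counter appear in this cycle: cumulative network.outgoing.bytes, per-interval network.outgoing.bytes.delta, and network.outgoing.bytes.rate (skipped this round for lack of new resources). A hedged sketch of how delta and rate values can be derived from two successive cumulative readings; the two-reading interface is hypothetical, not ceilometer's actual implementation:

    # Deriving .delta and .rate meters from two cumulative byte counters.
    def derive(prev_bytes, prev_ts, curr_bytes, curr_ts):
        delta = curr_bytes - prev_bytes                  # network.outgoing.bytes.delta
        elapsed = (curr_ts - prev_ts).total_seconds()
        rate = delta / elapsed if elapsed > 0 else 0.0   # network.outgoing.bytes.rate
        return delta, rate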
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.242 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.242 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.242 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.242 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.242 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.243 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.244 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:49:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:49:40.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
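[annotation] The burst of "Finished processing pollster [...]" lines marks the end of this polling task, with the manager reporting completion for every meter it iterated. A one-function sketch of that bookkeeping (run_one stands in for the per-pollster cycle sketched earlier):

    # Illustrative end-of-task loop matching the "Finished processing" burst;
    # the function name mirrors the logged execute_polling_task_processing,
    # but the body is a sketch, not the real implementation.
    def execute_polling_task_processing(pollsters, run_one):
        for pollster in pollsters:
            run_one(pollster)
            print(f"Finished processing pollster [{pollster.name}].")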
Feb 24 15:49:41 compute-0 nova_compute[188703]: 2026-02-24 15:49:41.155 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:42 compute-0 nova_compute[188703]: 2026-02-24 15:49:42.815 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
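[annotation] The recurring "[POLLIN] on fd 26" lines come from the OVS IDL's poll loop noticing readable data on the OVSDB connection socket. A minimal illustration of the same wakeup with Python's select module (the fd number and message format follow the log; the loop body is a sketch):

    # Wait for the socket to become readable, then log the wakeup.
    import select

    def log_wakeup(sock):
        poller = select.poll()
        poller.register(sock.fileno(), select.POLLIN)
        for fd, event in poller.poll():
            if event & select.POLLIN:
                print(f"[POLLIN] on fd {fd}")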
Feb 24 15:49:44 compute-0 podman[243762]: 2026-02-24 15:49:44.142348238 +0000 UTC m=+0.088106212 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:49:44 compute-0 podman[243761]: 2026-02-24 15:49:44.176029027 +0000 UTC m=+0.123472207 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:49:45 compute-0 podman[243803]: 2026-02-24 15:49:45.187636936 +0000 UTC m=+0.135325385 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_ipmi)
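[annotation] Each podman[...] health_status line records a periodic healthcheck: podman runs the configured test (the /openstack/healthcheck script mounted into the container) and reports the status plus the failing streak. The same check can be triggered by hand; a short sketch using a container name taken from the log:

    # Run a container's configured healthcheck, as the periodic events above do.
    import subprocess

    def run_healthcheck(container="ovn_metadata_agent"):
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        # Exit code 0 corresponds to health_status=healthy in the log.
        return "healthy" if result.returncode == 0 else "unhealthy"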
Feb 24 15:49:46 compute-0 nova_compute[188703]: 2026-02-24 15:49:46.157 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:47 compute-0 podman[243823]: 2026-02-24 15:49:47.180840794 +0000 UTC m=+0.130920582 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, architecture=x86_64)
Feb 24 15:49:47 compute-0 nova_compute[188703]: 2026-02-24 15:49:47.819 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:49 compute-0 nova_compute[188703]: 2026-02-24 15:49:49.131 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:49 compute-0 nova_compute[188703]: 2026-02-24 15:49:49.132 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:49:49 compute-0 nova_compute[188703]: 2026-02-24 15:49:49.132 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:49:50 compute-0 nova_compute[188703]: 2026-02-24 15:49:50.458 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:49:50 compute-0 nova_compute[188703]: 2026-02-24 15:49:50.460 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:49:50 compute-0 nova_compute[188703]: 2026-02-24 15:49:50.461 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:49:50 compute-0 nova_compute[188703]: 2026-02-24 15:49:50.462 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:49:51 compute-0 nova_compute[188703]: 2026-02-24 15:49:51.161 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:51 compute-0 podman[243845]: 2026-02-24 15:49:51.216021446 +0000 UTC m=+0.141292099 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1770267347, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=openstack_network_exporter)
Feb 24 15:49:52 compute-0 nova_compute[188703]: 2026-02-24 15:49:52.821 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.489 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.512 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.513 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.514 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.514 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.514 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.515 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.515 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.516 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:54 compute-0 nova_compute[188703]: 2026-02-24 15:49:54.516 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:49:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:49:55.707 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:49:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:49:55.708 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:49:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:49:55.709 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.165 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:56 compute-0 podman[243864]: 2026-02-24 15:49:56.193898916 +0000 UTC m=+0.140045015 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 24 15:49:56 compute-0 podman[243865]: 2026-02-24 15:49:56.224959644 +0000 UTC m=+0.169018315 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:49:56 compute-0 nova_compute[188703]: 2026-02-24 15:49:56.987 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.085 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.152 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.153 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.221 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.229 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.294 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.295 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.372 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.380 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.454 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.457 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.546 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.548 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.637 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.638 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.703 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:49:57 compute-0 nova_compute[188703]: 2026-02-24 15:49:57.824 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.235 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.237 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5037MB free_disk=72.21736907958984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.238 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.238 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.385 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.386 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.386 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.387 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.458 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.478 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.483 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:49:58 compute-0 nova_compute[188703]: 2026-02-24 15:49:58.484 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:49:59 compute-0 podman[204685]: time="2026-02-24T15:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:49:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:49:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Feb 24 15:50:01 compute-0 nova_compute[188703]: 2026-02-24 15:50:01.169 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:01 compute-0 openstack_network_exporter[207830]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:50:01 compute-0 openstack_network_exporter[207830]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:50:02 compute-0 nova_compute[188703]: 2026-02-24 15:50:02.826 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:03 compute-0 podman[243936]: 2026-02-24 15:50:03.116464817 +0000 UTC m=+0.076823731 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:50:03 compute-0 sshd-session[243934]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 15:50:06 compute-0 nova_compute[188703]: 2026-02-24 15:50:06.172 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:07 compute-0 nova_compute[188703]: 2026-02-24 15:50:07.828 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:11 compute-0 nova_compute[188703]: 2026-02-24 15:50:11.178 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:12 compute-0 nova_compute[188703]: 2026-02-24 15:50:12.832 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:14 compute-0 podman[243959]: 2026-02-24 15:50:14.80754788 +0000 UTC m=+0.113728148 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:50:14 compute-0 podman[243960]: 2026-02-24 15:50:14.831600005 +0000 UTC m=+0.128752053 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 24 15:50:16 compute-0 podman[243999]: 2026-02-24 15:50:16.129312015 +0000 UTC m=+0.083411951 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 15:50:16 compute-0 nova_compute[188703]: 2026-02-24 15:50:16.182 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:17 compute-0 nova_compute[188703]: 2026-02-24 15:50:17.837 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:18 compute-0 podman[244019]: 2026-02-24 15:50:18.188945616 +0000 UTC m=+0.136191448 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Feb 24 15:50:21 compute-0 nova_compute[188703]: 2026-02-24 15:50:21.185 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:22 compute-0 podman[244039]: 2026-02-24 15:50:22.150380875 +0000 UTC m=+0.106886830 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, name=ubi9/ubi-minimal, distribution-scope=public, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 15:50:22 compute-0 nova_compute[188703]: 2026-02-24 15:50:22.838 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:26 compute-0 nova_compute[188703]: 2026-02-24 15:50:26.188 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:27 compute-0 podman[244058]: 2026-02-24 15:50:27.161602075 +0000 UTC m=+0.111163638 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute)
Feb 24 15:50:27 compute-0 podman[244059]: 2026-02-24 15:50:27.20055155 +0000 UTC m=+0.146046511 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 24 15:50:27 compute-0 nova_compute[188703]: 2026-02-24 15:50:27.840 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:29 compute-0 podman[204685]: time="2026-02-24T15:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:50:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:50:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Feb 24 15:50:31 compute-0 nova_compute[188703]: 2026-02-24 15:50:31.191 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:31 compute-0 openstack_network_exporter[207830]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:50:31 compute-0 openstack_network_exporter[207830]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:50:32 compute-0 nova_compute[188703]: 2026-02-24 15:50:32.842 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:34 compute-0 podman[244105]: 2026-02-24 15:50:34.099917689 +0000 UTC m=+0.059057740 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:50:35 compute-0 nova_compute[188703]: 2026-02-24 15:50:35.457 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:35.456 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:50:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:35.459 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:50:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:35.459 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:36 compute-0 nova_compute[188703]: 2026-02-24 15:50:36.193 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:37 compute-0 nova_compute[188703]: 2026-02-24 15:50:37.847 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:41 compute-0 nova_compute[188703]: 2026-02-24 15:50:41.197 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:42 compute-0 nova_compute[188703]: 2026-02-24 15:50:42.852 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.217 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.217 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.238 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.332 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.334 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.349 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.350 188707 INFO nova.compute.claims [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Claim successful on node compute-0.ctlplane.example.com
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.518 188707 DEBUG nova.compute.provider_tree [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.534 188707 DEBUG nova.scheduler.client.report [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
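The inventory dict logged above fixes the host's schedulable capacity: Placement treats each resource class as (total - reserved) × allocation_ratio. A quick check of the logged figures (plain arithmetic, not Nova code):

    inventory = {  # copied from the log line above
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {capacity:g} schedulable')
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 70.2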
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.557 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.558 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.600 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.600 188707 DEBUG nova.network.neutron [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.628 188707 INFO nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.708 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.798 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.801 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.802 188707 INFO nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Creating image(s)
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.803 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.804 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.805 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.827 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.907 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
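Each qemu-img call above runs under the oslo_concurrency.prlimit wrapper, capping the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a malformed image cannot balloon or hang the compute service. The same effect with the stdlib, as a sketch (the image path is hypothetical):

    import resource
    import subprocess

    def limit_child():
        # mirrors the logged flags: --as=1073741824 --cpu=30
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    subprocess.run(['qemu-img', 'info', '--force-share', '--output=json',
                    '/tmp/example.qcow2'],  # hypothetical path
                   preexec_fn=limit_child)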
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.909 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.909 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.926 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.988 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:44 compute-0 nova_compute[188703]: 2026-02-24 15:50:44.989 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.026 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
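The create command above is the core of Nova's Qcow2 image backend: the instance disk is a copy-on-write overlay whose backing file is the shared base image in _base, sized to the flavor's 1 GiB root disk (1073741824 bytes). A standalone reproduction of the same step (paths are illustrative stand-ins, not the logged ones):

    import json
    import subprocess

    base = '/tmp/base.raw'       # stands in for /var/lib/nova/instances/_base/<sha1>
    overlay = '/tmp/disk.qcow2'  # stands in for /var/lib/nova/instances/<uuid>/disk

    subprocess.run(['qemu-img', 'create', '-f', 'raw', base, '16M'], check=True)
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-o', f'backing_file={base},backing_fmt=raw',
                    overlay, '1073741824'], check=True)

    info = json.loads(subprocess.run(
        ['qemu-img', 'info', '--force-share', '--output=json', overlay],
        capture_output=True, check=True).stdout)
    print(info['virtual-size'], info['backing-filename'])  # 1073741824 /tmp/base.raw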
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.028 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.029 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.107 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.108 188707 DEBUG nova.virt.disk.api [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking if we can resize image /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.108 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 podman[244137]: 2026-02-24 15:50:45.129548985 +0000 UTC m=+0.087912687 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:50:45 compute-0 podman[244138]: 2026-02-24 15:50:45.141916175 +0000 UTC m=+0.101509021 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.179 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.181 188707 DEBUG nova.virt.disk.api [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Cannot resize image /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
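The check that produced the two lines above compares the flavor's requested size with the image's current virtual size reported by qemu-img info; overlays are only ever grown, never shrunk. In outline (a sketch of the logged decision, not the nova.virt.disk.api source):

    def can_resize_image(virtual_size: int, requested_size: int) -> bool:
        # qcow2 files cannot safely be shrunk in place, so only grow
        return requested_size > virtual_size

    # both the disk and the request are 1073741824 bytes here, hence the
    # "Cannot resize image ... to a smaller size" message above
    print(can_resize_image(1073741824, 1073741824))  # False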
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.182 188707 DEBUG nova.objects.instance [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.196 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.197 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.198 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.224 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.281 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.283 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.283 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.300 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.357 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.360 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.413 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.415 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.416 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.503 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.505 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.505 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Ensure instance console log exists: /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.506 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.507 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.508 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.899 188707 DEBUG nova.network.neutron [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Successfully updated port: 7d447097-3ec6-4be0-a7c0-25faabfb8456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.919 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.919 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquired lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.919 188707 DEBUG nova.network.neutron [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.985 188707 DEBUG nova.compute.manager [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-changed-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.985 188707 DEBUG nova.compute.manager [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Refreshing instance network info cache due to event network-changed-7d447097-3ec6-4be0-a7c0-25faabfb8456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:50:45 compute-0 nova_compute[188703]: 2026-02-24 15:50:45.986 188707 DEBUG oslo_concurrency.lockutils [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.029 188707 DEBUG nova.network.neutron [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.199 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.834 188707 DEBUG nova.network.neutron [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
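The network_info structure cached above is plain JSON: the fixed address, its floating IP, and the OVN/OVS binding details all sit under network.subnets. Extracting the addresses (the blob below is trimmed to just the fields this sketch touches):

    import json

    network_info = json.loads('''
    [{"network": {"subnets": [{"cidr": "192.168.0.0/24",
      "ips": [{"address": "192.168.0.231", "type": "fixed",
               "floating_ips": [{"address": "192.168.122.198",
                                 "type": "floating"}]}]}]}}]
    ''')
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(ip['address'], '->', floats)
    # 192.168.0.231 -> ['192.168.122.198']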
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.859 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Releasing lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.859 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Instance network_info: |[{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.860 188707 DEBUG oslo_concurrency.lockutils [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.860 188707 DEBUG nova.network.neutron [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Refreshing network info cache for port 7d447097-3ec6-4be0-a7c0-25faabfb8456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.867 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Start _get_guest_xml network_info=[{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.879 188707 WARNING nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.897 188707 DEBUG nova.virt.libvirt.host [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.898 188707 DEBUG nova.virt.libvirt.host [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.904 188707 DEBUG nova.virt.libvirt.host [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.906 188707 DEBUG nova.virt.libvirt.host [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.906 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.907 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T15:44:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='521ca388-0b2e-40c6-bb06-118d4ed86b49',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.908 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.909 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.909 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.910 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.910 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.911 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.912 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.913 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.913 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.914 188707 DEBUG nova.virt.hardware [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
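The topology lines above trace a small constraint search: with no flavor or image preferences (all 0:0:0) and the default 65536-per-dimension limits, Nova enumerates the factorisations of the vCPU count, and for 1 vCPU only 1 socket × 1 core × 1 thread exists. The enumeration idea in miniature (illustrative only, not the nova.virt.hardware code):

    def possible_topologies(vcpus: int):
        # every (sockets, cores, threads) factorisation of the vCPU count
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the one topology logged above
    print(possible_topologies(4))  # six options, from (1, 1, 4) to (4, 1, 1)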
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.922 188707 DEBUG nova.virt.libvirt.vif [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',id=3,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-7wd0g29f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:50:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=5315fe0d-538a-4ea7-b3fe-92e5a13f1678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.923 188707 DEBUG nova.network.os_vif_util [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.924 188707 DEBUG nova.network.os_vif_util [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.925 188707 DEBUG nova.objects.instance [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.942 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] End _get_guest_xml xml=<domain type="kvm">
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <uuid>5315fe0d-538a-4ea7-b3fe-92e5a13f1678</uuid>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <name>instance-00000003</name>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <memory>524288</memory>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <metadata>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:name>vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla</nova:name>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 15:50:46</nova:creationTime>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:flavor name="m1.small">
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:memory>512</nova:memory>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:ephemeral>1</nova:ephemeral>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:user uuid="bd338d866e3242aeb685fec99c451955">admin</nova:user>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:project uuid="4407f5b870e145d8917119ad928717e8">admin</nova:project>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         <nova:port uuid="7d447097-3ec6-4be0-a7c0-25faabfb8456">
Feb 24 15:50:46 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="192.168.0.231" ipVersion="4"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </metadata>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <system>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="serial">5315fe0d-538a-4ea7-b3fe-92e5a13f1678</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="uuid">5315fe0d-538a-4ea7-b3fe-92e5a13f1678</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </system>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <os>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </os>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <features>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <apic/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </features>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </clock>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <target dev="vdb" bus="virtio"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.config"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:b0:53:2a"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <target dev="tap7d447097-3e"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/console.log" append="off"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </serial>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <video>
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </video>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 15:50:46 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 15:50:46 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 15:50:46 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:50:46 compute-0 nova_compute[188703]: </domain>
Feb 24 15:50:46 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
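
The domain XML that nova's libvirt driver just assembled is logged verbatim above (and could also be pulled live with virsh dumpxml instance-00000003). A small sketch, assuming xml_text holds that XML, to pull out the disk mapping and the VIF tap device:

    # Sketch: mine the logged libvirt domain XML for disk sources and the VIF tap.
    import xml.etree.ElementTree as ET

    root = ET.fromstring(xml_text)  # xml_text: the <domain> document logged above
    for disk in root.findall("./devices/disk"):
        print(disk.find("target").get("dev"), "<-", disk.find("source").get("file"))
    for tgt in root.findall("./devices/interface/target"):
        print("vif:", tgt.get("dev"))  # tap7d447097-3e
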
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.942 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Preparing to wait for external event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.942 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.943 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.943 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
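
Note the ordering: the event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 is registered before os-vif plugs the port, so Neutron's callback cannot be lost to a race if it arrives early. A toy illustration of that prepare-then-wait pattern (not Nova's actual implementation, which sits behind the per-instance "-events" lock shown above):

    # Toy sketch of prepare-then-wait: register interest in the event first,
    # so a notification that fires before the wait is still observed.
    import threading

    events = {}

    def prepare_for_instance_event(name):
        return events.setdefault(name, threading.Event())

    def external_instance_event(name):
        events.setdefault(name, threading.Event()).set()

    ev = prepare_for_instance_event("network-vif-plugged-7d447097")
    external_instance_event("network-vif-plugged-7d447097")  # may beat the wait
    assert ev.wait(timeout=300)  # still True; the early event was kept
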
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.944 188707 DEBUG nova.virt.libvirt.vif [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',id=3,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-7wd0g29f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:50:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Feb 24 15:50:46 compute-0 nova_compute[188703]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=5315fe0d-538a-4ea7-b3fe-92e5a13f1678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.945 188707 DEBUG nova.network.os_vif_util [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.946 188707 DEBUG nova.network.os_vif_util [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.946 188707 DEBUG os_vif [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.947 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.948 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.948 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.952 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.953 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d447097-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.953 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d447097-3e, col_values=(('external_ids', {'iface-id': '7d447097-3ec6-4be0-a7c0-25faabfb8456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:53:2a', 'vm-uuid': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.955 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
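
The two ovsdbapp commands above (AddPortCommand plus DbSetCommand on the Interface row) are the programmatic form of a plain ovs-vsctl port plug. A sketch of the hand-run equivalent, values copied from the log, wrapped in subprocess (root on the compute node assumed):

    # Sketch: ovs-vsctl equivalent of the logged AddPortCommand/DbSetCommand pair.
    import subprocess

    port = "tap7d447097-3e"
    ids = {
        "iface-id": "7d447097-3ec6-4be0-a7c0-25faabfb8456",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:b0:53:2a",
        "vm-uuid": "5315fe0d-538a-4ea7-b3fe-92e5a13f1678",
    }
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port]
                   + [f"external_ids:{k}={v}" for k, v in ids.items()], check=True)
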
Feb 24 15:50:46 compute-0 NetworkManager[56995]: <info>  [1771948246.9572] manager: (tap7d447097-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.958 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.966 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:46 compute-0 nova_compute[188703]: 2026-02-24 15:50:46.967 188707 INFO os_vif [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e')
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.013 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.013 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.014 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.014 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No VIF found with MAC fa:16:3e:b0:53:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.014 188707 INFO nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Using config drive
Feb 24 15:50:47 compute-0 podman[244201]: 2026-02-24 15:50:47.11630842 +0000 UTC m=+0.079496666 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi)
Feb 24 15:50:47 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:50:46.922 188707 DEBUG nova.virt.libvirt.vif [None req-16d6b6ce-4b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 24 15:50:47 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:50:46.944 188707 DEBUG nova.virt.libvirt.vif [None req-16d6b6ce-4b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
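
These two rsyslogd complaints mean the long nova DEBUG records above (the VIF dumps with the embedded user_data) were truncated at the configured 8096-byte cap. If the full records are wanted in syslog, rsyslog's message size limit can be raised; a sketch for /etc/rsyslog.conf (RainerScript form, placed near the top of the file before the input modules load):

    # rsyslog.conf sketch: lift the per-message cap above nova's longest DEBUG
    # lines, then restart rsyslog. 64k is an assumption, not taken from this log.
    global(maxMessageSize="64k")
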
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.273 188707 INFO nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Creating config drive at /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.config
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.281 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpogojalxg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.400 188707 DEBUG oslo_concurrency.processutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpogojalxg" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
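
The config drive is an ordinary ISO 9660 image (volume label config-2, built by the mkisofs run above) that the guest sees as the sata cdrom sda from the domain XML. A sketch to inspect it in place, assuming isoinfo from genisoimage is installed on the host:

    # Sketch: list the contents of the just-built config drive ISO.
    import subprocess

    iso = "/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.config"
    subprocess.run(["isoinfo", "-i", iso, "-l"], check=True)
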
Feb 24 15:50:47 compute-0 kernel: tap7d447097-3e: entered promiscuous mode
Feb 24 15:50:47 compute-0 ovn_controller[98701]: 2026-02-24T15:50:47Z|00040|binding|INFO|Claiming lport 7d447097-3ec6-4be0-a7c0-25faabfb8456 for this chassis.
Feb 24 15:50:47 compute-0 ovn_controller[98701]: 2026-02-24T15:50:47Z|00041|binding|INFO|7d447097-3ec6-4be0-a7c0-25faabfb8456: Claiming fa:16:3e:b0:53:2a 192.168.0.231
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.483 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 NetworkManager[56995]: <info>  [1771948247.4859] manager: (tap7d447097-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 24 15:50:47 compute-0 ovn_controller[98701]: 2026-02-24T15:50:47Z|00042|binding|INFO|Setting lport 7d447097-3ec6-4be0-a7c0-25faabfb8456 ovn-installed in OVS
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.493 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.499 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 systemd-udevd[244240]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:50:47 compute-0 systemd-machined[158049]: New machine qemu-3-instance-00000003.
Feb 24 15:50:47 compute-0 ovn_controller[98701]: 2026-02-24T15:50:47Z|00043|binding|INFO|Setting lport 7d447097-3ec6-4be0-a7c0-25faabfb8456 up in Southbound
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.544 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:53:2a 192.168.0.231'], port_security=['fa:16:3e:b0:53:2a 192.168.0.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-fxjcpj7jbkrv-cstggnamwujf-port-klsqv3gcbm73', 'neutron:cidrs': '192.168.0.231/24', 'neutron:device_id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-fxjcpj7jbkrv-cstggnamwujf-port-klsqv3gcbm73', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=7d447097-3ec6-4be0-a7c0-25faabfb8456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.546 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 7d447097-3ec6-4be0-a7c0-25faabfb8456 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a bound to our chassis
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.549 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
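
Provisioning metadata here means the agent wires 169.254.169.254 into a per-network namespace (the RTM_NEWADDR replies below, targeted at ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, show the address landing on tap863f062e-11). From inside the guest, the metadata service, including the user_data embedded above, is then one HTTP hop away; a sketch:

    # Sketch (run inside the guest): fetch metadata from the link-local endpoint
    # the OVN metadata agent has just provisioned for this network.
    import urllib.request

    base = "http://169.254.169.254/openstack/latest"
    for path in ("meta_data.json", "user_data"):
        with urllib.request.urlopen(f"{base}/{path}", timeout=5) as resp:
            print(path, "->", resp.read()[:80])
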
Feb 24 15:50:47 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 24 15:50:47 compute-0 NetworkManager[56995]: <info>  [1771948247.5566] device (tap7d447097-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:50:47 compute-0 NetworkManager[56995]: <info>  [1771948247.5574] device (tap7d447097-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.570 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3a94ee-59d8-41cc-9e62-978d60eaa2de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.609 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[82c83c38-075f-4e2d-a51c-fe9a3f0f456e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.614 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[ace40231-8f72-4292-87f3-0db8c8405012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.646 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[d936ec2b-6dda-471e-ac2b-4962e2890902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.669 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[51e07a9d-48c5-484b-a264-66d4f16156fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 38793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244253, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.692 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8d2ed1-c34f-4cd3-890c-4588b18e8505]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244255, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244255, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.694 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.696 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.698 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.701 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.702 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.703 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:50:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:47.704 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.855 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.896 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948247.8963184, 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.897 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] VM Started (Lifecycle Event)
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.930 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.941 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948247.898536, 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.942 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] VM Paused (Lifecycle Event)
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.984 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:50:47 compute-0 nova_compute[188703]: 2026-02-24 15:50:47.992 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.029 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] During sync_power_state the instance has a pending task (spawning). Skip.
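
For reading the sync line above: the numeric states come from nova/compute/power_state.py; DB power_state 0 is NOSTATE (nothing recorded yet) and VM power_state 3 is PAUSED, matching the Paused lifecycle event, since the guest is started paused and only resumed once network-vif-plugged arrives. A reference sketch (values quoted from memory of that module, so treat them as an assumption):

    # nova/compute/power_state.py mapping (assumed), as used in the sync message.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7
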
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.072 188707 DEBUG nova.compute.manager [req-abfa9ce7-2099-49ce-a6c3-95e7e9ccfd54 req-78cce806-d646-40c5-abcf-b978fc6af339 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.072 188707 DEBUG oslo_concurrency.lockutils [req-abfa9ce7-2099-49ce-a6c3-95e7e9ccfd54 req-78cce806-d646-40c5-abcf-b978fc6af339 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.073 188707 DEBUG oslo_concurrency.lockutils [req-abfa9ce7-2099-49ce-a6c3-95e7e9ccfd54 req-78cce806-d646-40c5-abcf-b978fc6af339 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.073 188707 DEBUG oslo_concurrency.lockutils [req-abfa9ce7-2099-49ce-a6c3-95e7e9ccfd54 req-78cce806-d646-40c5-abcf-b978fc6af339 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.073 188707 DEBUG nova.compute.manager [req-abfa9ce7-2099-49ce-a6c3-95e7e9ccfd54 req-78cce806-d646-40c5-abcf-b978fc6af339 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Processing event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.074 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.079 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948248.0792887, 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.080 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] VM Resumed (Lifecycle Event)
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.081 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.088 188707 INFO nova.virt.libvirt.driver [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Instance spawned successfully.
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.088 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.096 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.109 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.118 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.119 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.120 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.121 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.122 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.123 188707 DEBUG nova.virt.libvirt.driver [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.131 188707 DEBUG nova.network.neutron [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updated VIF entry in instance network info cache for port 7d447097-3ec6-4be0-a7c0-25faabfb8456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.132 188707 DEBUG nova.network.neutron [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.136 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.154 188707 DEBUG oslo_concurrency.lockutils [req-193b6344-60a5-41f3-a7b9-6bfa3589ea7b req-2ffbf694-48f4-42b2-bc63-877a451a95aa 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.177 188707 INFO nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Took 3.38 seconds to spawn the instance on the hypervisor.
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.177 188707 DEBUG nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.236 188707 INFO nova.compute.manager [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Took 3.94 seconds to build instance.
Feb 24 15:50:48 compute-0 nova_compute[188703]: 2026-02-24 15:50:48.257 188707 DEBUG oslo_concurrency.lockutils [None req-16d6b6ce-4b02-4f41-a19f-af062b648788 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 15:50:48 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 15:50:48 compute-0 podman[244263]: 2026-02-24 15:50:48.418471742 +0000 UTC m=+0.104567015 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64)
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.180 188707 DEBUG nova.compute.manager [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.181 188707 DEBUG oslo_concurrency.lockutils [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.182 188707 DEBUG oslo_concurrency.lockutils [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.182 188707 DEBUG oslo_concurrency.lockutils [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.183 188707 DEBUG nova.compute.manager [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] No waiting events found dispatching network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.184 188707 WARNING nova.compute.manager [req-8eda9e2f-60d5-4c69-ac70-3278ebbf56c7 req-e6e50311-31fe-4378-8d52-f82ab35d0700 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received unexpected event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 for instance with vm_state active and task_state None.
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.484 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.485 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.677 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.678 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:50:50 compute-0 nova_compute[188703]: 2026-02-24 15:50:50.679 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:50:51 compute-0 nova_compute[188703]: 2026-02-24 15:50:51.957 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.698 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.730 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.731 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.731 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.731 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.731 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.732 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.732 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.732 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:50:52 compute-0 podman[244303]: 2026-02-24 15:50:52.792703702 +0000 UTC m=+0.063245978 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1770267347, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9/ubi-minimal)
Feb 24 15:50:52 compute-0 nova_compute[188703]: 2026-02-24 15:50:52.857 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:54 compute-0 nova_compute[188703]: 2026-02-24 15:50:54.185 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:54 compute-0 nova_compute[188703]: 2026-02-24 15:50:54.185 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:54 compute-0 nova_compute[188703]: 2026-02-24 15:50:54.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:55.708 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:55.709 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:50:55.709 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:56 compute-0 nova_compute[188703]: 2026-02-24 15:50:56.962 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.863 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.988 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.989 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:50:57 compute-0 nova_compute[188703]: 2026-02-24 15:50:57.990 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:50:58 compute-0 podman[244321]: 2026-02-24 15:50:58.148938077 +0000 UTC m=+0.108052780 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:50:58 compute-0 podman[244322]: 2026-02-24 15:50:58.232636907 +0000 UTC m=+0.182891376 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.383 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.450 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.452 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.534 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.536 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.593 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.595 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.641 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.649 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.708 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.710 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.758 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.759 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.808 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.809 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.883 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.889 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.937 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:58 compute-0 nova_compute[188703]: 2026-02-24 15:50:58.938 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.022 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.023 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.081 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.083 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.143 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.541 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.542 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4947MB free_disk=72.2164535522461GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.543 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.543 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.652 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.653 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.653 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.653 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.654 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.673 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.691 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.691 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.708 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.728 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 15:50:59 compute-0 podman[204685]: time="2026-02-24T15:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:50:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:50:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.838 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.861 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.889 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:50:59 compute-0 nova_compute[188703]: 2026-02-24 15:50:59.889 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
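The acquired/released pair around _update_available_resource comes from oslo.concurrency's named-lock decorator, which also reports how long the lock was held (0.347s here). A minimal sketch of the pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # A named lock serialises resource-tracker updates; oslo logs the
    # acquire, the release, and the hold time, as in the lines above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        print("runs while the 'compute_resources' lock is held")

    update_available_resource()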
Feb 24 15:51:01 compute-0 openstack_network_exporter[207830]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:51:01 compute-0 openstack_network_exporter[207830]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
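The dpif-netdev/* appctl commands apply only to the userspace (PMD) datapath; on a node running the kernel OVS datapath they fail with "please specify an existing datapath", so the exporter's PMD collectors log these errors and carry on. A sketch of the equivalent probe and its failure handling (not the exporter's code):

    import subprocess

    # Query PMD rxq stats via ovs-appctl; on a kernel-datapath host the
    # command fails because no dpif-netdev datapath exists.
    def pmd_rxq_show():
        result = subprocess.run(
            ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            return None  # error text ("please specify an existing datapath")
        return result.stdout

    print(pmd_rxq_show())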
Feb 24 15:51:01 compute-0 nova_compute[188703]: 2026-02-24 15:51:01.965 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:02 compute-0 anacron[30515]: Job `cron.weekly' started
Feb 24 15:51:02 compute-0 anacron[30515]: Job `cron.weekly' terminated
Feb 24 15:51:02 compute-0 nova_compute[188703]: 2026-02-24 15:51:02.861 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:05 compute-0 podman[244403]: 2026-02-24 15:51:05.126703081 +0000 UTC m=+0.084296597 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
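The health_status=healthy fields in these podman events come from the container's configured healthcheck ('test': '/openstack/healthcheck podman_exporter' above), which podman runs on a timer. The same check can be triggered on demand; a sketch, exit status 0 meaning healthy:

    import subprocess

    # Run the configured healthcheck for the podman_exporter container.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")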
Feb 24 15:51:06 compute-0 nova_compute[188703]: 2026-02-24 15:51:06.971 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:07 compute-0 nova_compute[188703]: 2026-02-24 15:51:07.865 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:11 compute-0 nova_compute[188703]: 2026-02-24 15:51:11.975 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:12 compute-0 nova_compute[188703]: 2026-02-24 15:51:12.865 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:16 compute-0 podman[244425]: 2026-02-24 15:51:16.157588551 +0000 UTC m=+0.104039840 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:51:16 compute-0 podman[244426]: 2026-02-24 15:51:16.167836383 +0000 UTC m=+0.107455514 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:51:16 compute-0 nova_compute[188703]: 2026-02-24 15:51:16.980 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:17 compute-0 ovn_controller[98701]: 2026-02-24T15:51:17Z|00044|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Feb 24 15:51:17 compute-0 nova_compute[188703]: 2026-02-24 15:51:17.869 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:18 compute-0 podman[244464]: 2026-02-24 15:51:18.13394643 +0000 UTC m=+0.087140945 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:51:19 compute-0 podman[244486]: 2026-02-24 15:51:19.146854823 +0000 UTC m=+0.108909543 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=)
Feb 24 15:51:19 compute-0 ovn_controller[98701]: 2026-02-24T15:51:19Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:53:2a 192.168.0.231
Feb 24 15:51:19 compute-0 ovn_controller[98701]: 2026-02-24T15:51:19Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:53:2a 192.168.0.231
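The DHCPOFFER/DHCPACK pair is OVN's pinctrl thread answering the instance's DHCP request natively (no dnsmasq involved), handing fa:16:3e:b0:53:2a its fixed address 192.168.0.231. A hypothetical parser for these lines, if you wanted to map MACs to leased addresses:

    import re

    # Extract MAC -> IP from a pinctrl DHCPACK log line.
    line = ("2026-02-24T15:51:19Z|00009|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:b0:53:2a 192.168.0.231")
    m = re.search(r"DHCPACK\s+([0-9a-f:]{17})\s+(\S+)", line)
    if m:
        mac, ip = m.groups()
        print(f"{mac} -> {ip}")  # fa:16:3e:b0:53:2a -> 192.168.0.231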
Feb 24 15:51:21 compute-0 nova_compute[188703]: 2026-02-24 15:51:21.983 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:22 compute-0 nova_compute[188703]: 2026-02-24 15:51:22.872 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:23 compute-0 podman[244519]: 2026-02-24 15:51:23.134377777 +0000 UTC m=+0.090160328 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, version=9.7)
Feb 24 15:51:26 compute-0 nova_compute[188703]: 2026-02-24 15:51:26.986 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:27 compute-0 nova_compute[188703]: 2026-02-24 15:51:27.875 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:29 compute-0 podman[244538]: 2026-02-24 15:51:29.156649275 +0000 UTC m=+0.112836162 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 15:51:29 compute-0 podman[244539]: 2026-02-24 15:51:29.190953888 +0000 UTC m=+0.140055630 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:51:29 compute-0 podman[204685]: time="2026-02-24T15:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:51:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:51:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 24 15:51:30 compute-0 sshd-session[244582]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 15:51:31 compute-0 openstack_network_exporter[207830]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:51:31 compute-0 openstack_network_exporter[207830]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:51:31 compute-0 nova_compute[188703]: 2026-02-24 15:51:31.990 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:32 compute-0 nova_compute[188703]: 2026-02-24 15:51:32.878 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:36 compute-0 podman[244584]: 2026-02-24 15:51:36.142351367 +0000 UTC m=+0.104138303 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:51:36 compute-0 nova_compute[188703]: 2026-02-24 15:51:36.994 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:37 compute-0 nova_compute[188703]: 2026-02-24 15:51:37.885 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.829 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.830 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
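The warning above means the polling task has more pollsters than worker threads ([1] here), so pollsters execute sequentially through a small pool rather than in parallel. A minimal sketch of that condition with concurrent.futures:

    from concurrent.futures import ThreadPoolExecutor

    # More tasks than workers: a single-thread pool drains 24 pollsters
    # one at a time, which is why the cycle takes longer than expected.
    pollsters = [f"pollster-{i}" for i in range(24)]
    with ThreadPoolExecutor(max_workers=1) as pool:
        for name in pollsters:
            pool.submit(print, "polling", name)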
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.830 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.843 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.848 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 15:51:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:39.852 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5315fe0d-538a-4ea7-b3fe-92e5a13f1678 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
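keystoneauth1 logs each request as an equivalent curl command, with the token reduced to a SHA256 digest. A reconstruction of the same GET using requests (the token value below is a placeholder, since the real one is never logged in the clear):

    import requests

    # Same request as the logged curl command, headers copied verbatim.
    resp = requests.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "5315fe0d-538a-4ea7-b3fe-92e5a13f1678",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-novaclient",
            "X-Auth-Token": "<redacted-token>",  # placeholder
            "X-OpenStack-Nova-API-Version": "2.1",
        },
    )
    print(resp.status_code)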
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.856 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 24 Feb 2026 15:51:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-074c4679-4a42-46fd-add1-9d75bce1dd20 x-openstack-request-id: req-074c4679-4a42-46fd-add1-9d75bce1dd20 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.856 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5315fe0d-538a-4ea7-b3fe-92e5a13f1678", "name": "vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla", "status": "ACTIVE", "tenant_id": "4407f5b870e145d8917119ad928717e8", "user_id": "bd338d866e3242aeb685fec99c451955", "metadata": {"metering.server_group": "105127c2-20fd-4471-8609-2ac19fea2fd2"}, "hostId": "781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62", "image": {"id": "de6b8fc8-e0dc-4bbf-943b-e6ac6027af11", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"}]}, "flavor": {"id": "521ca388-0b2e-40c6-bb06-118d4ed86b49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/521ca388-0b2e-40c6-bb06-118d4ed86b49"}]}, "created": "2026-02-24T15:50:41Z", "updated": "2026-02-24T15:50:48Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.231", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b0:53:2a"}, {"version": 4, "addr": "192.168.122.198", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b0:53:2a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5315fe0d-538a-4ea7-b3fe-92e5a13f1678"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5315fe0d-538a-4ea7-b3fe-92e5a13f1678"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T15:50:48.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
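From this response body, the discovery code keeps the metering.server_group metadata key and the instance addresses (see the instance data line that follows). A sketch of that extraction over a trimmed copy of the body above:

    import json

    # Trimmed RESP BODY, keeping only the fields read back out below.
    resp_body = '''{"server": {"id": "5315fe0d-538a-4ea7-b3fe-92e5a13f1678",
      "metadata": {"metering.server_group": "105127c2-20fd-4471-8609-2ac19fea2fd2"},
      "addresses": {"private": [
        {"version": 4, "addr": "192.168.0.231", "OS-EXT-IPS:type": "fixed"},
        {"version": 4, "addr": "192.168.122.198", "OS-EXT-IPS:type": "floating"}]}}}'''
    server = json.loads(resp_body)["server"]
    print(server["metadata"].get("metering.server_group"))
    for net, addrs in server["addresses"].items():
        for entry in addrs:
            print(net, entry["OS-EXT-IPS:type"], entry["addr"])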
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.856 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5315fe0d-538a-4ea7-b3fe-92e5a13f1678 used request id req-074c4679-4a42-46fd-add1-9d75bce1dd20 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.857 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'name': 'vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.860 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'name': 'vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.860 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:51:40.861011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.881 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.907 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/memory.usage volume: 49.66015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.939 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
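The memory.usage samples above are reported in MB; against the m1.small flavor (ram=512 in the instance data earlier), all three guests sit just under 10% utilisation. A quick check of the arithmetic:

    # Relate the three logged memory.usage samples to the flavor's RAM.
    flavor_ram_mb = 512
    for usage_mb in (48.79296875, 49.66015625, 49.05859375):
        print(f"{usage_mb:.2f} MB = {usage_mb / flavor_ram_mb:.1%} of flavor RAM")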
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:51:40.940652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.965 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.965 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.965 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.992 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.993 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:40.993 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.020 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
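disk.device.allocation is reported in bytes, one sample per attached device (three per instance above, presumably the root disk plus the flavor's ephemeral and config-drive devices). Converting the first instance's samples:

    # Logged allocation samples for fd83ae88-..., bytes to MiB.
    for alloc_b in (21831680, 1253376, 487424):
        print(f"{alloc_b} B = {alloc_b / 2**20:.2f} MiB")
    # 20.82 MiB, 1.20 MiB, 0.46 MiB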
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:51:41.022834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.032 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 / tap7d447097-3e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.033 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.036 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
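
[annotation] The eleven lines above are one complete pass of the agent's per-pollster cycle, and the same pattern repeats for every meter in this excerpt: run discovery (local_instances) for the pollster, check whether it belongs to a source that needs hashring coordination (none here), record a heartbeat, turn the inspector's per-instance statistics into samples, and log completion. Note that the "Updated heartbeat" confirmations are written by a different worker (12) than the one doing the polling (14), consistent with a separate status process watching shared heartbeat state. A minimal, hypothetical sketch of that control flow; run_pollster and the dict-based pollster object are invented for illustration and are not ceilometer's internals:

    # Hypothetical sketch of the five-step cycle visible in the log above.
    import datetime
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("polling.manager")

    def run_pollster(pollster, resources, heartbeats):
        # Step 1: discovery produced the resource list for this pollster.
        LOG.debug("Executing discovery process for pollster [%s]", pollster["name"])
        # Step 2: coordination check; no hashring means poll everything locally.
        if pollster.get("coordination_group") is None:
            LOG.debug("Pollster [%s] needs no coordination", pollster["name"])
        LOG.info("Polling pollster %s", pollster["name"])
        # Step 3: heartbeat, read by a separate status watcher.
        heartbeats[pollster["name"]] = datetime.datetime.utcnow()
        samples = []
        for resource_id in resources:
            # Step 4: one sample per resource (per device, for disk meters).
            volume = pollster["inspect"](resource_id)
            LOG.debug("%s/%s volume: %s", resource_id, pollster["name"], volume)
            samples.append((resource_id, pollster["name"], volume))
        # Step 5: completion marker, as in the INFO line above.
        LOG.info("Finished polling pollster %s", pollster["name"])
        return samples

    heartbeats = {}
    run_pollster({"name": "power.state", "inspect": lambda r: 1},
                 ["fd83ae88", "5315fe0d", "4e6fb5f9"], heartbeats)
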
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:51:41.037384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.038 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.039 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:51:41.038757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.039 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
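
[annotation] network.incoming.bytes.delta reports the change since the previous poll rather than the cumulative counter: compare the cumulative network.incoming.bytes samples just above (2052, 1486, 4933) with the deltas here (84, 0, 84). The earlier inspect_vnics line "No delta meter predecessor" for tap7d447097-3e fits the same scheme: a device seen for the first time has no cached reading to subtract, which is consistent with instance 5315fe0d... producing a delta of 0. A small sketch of that cache-and-subtract idea (the cache layout is an assumption, not ceilometer's actual structure, and ceilometer's handling of the first reading may differ):

    # Hypothetical delta computation against a cached previous reading.
    _previous = {}  # (instance_id, device) -> last cumulative counter seen

    def delta_sample(instance_id, device, cumulative):
        key = (instance_id, device)
        last = _previous.get(key)
        _previous[key] = cumulative
        if last is None:
            return None  # no predecessor: nothing to subtract yet
        # Clamp to zero in case the guest rebooted and its counters reset.
        return max(cumulative - last, 0)

    assert delta_sample("5315fe0d", "tap7d447097-3e", 1486) is None
    assert delta_sample("5315fe0d", "tap7d447097-3e", 1486) == 0
    assert delta_sample("5315fe0d", "tap7d447097-3e", 1570) == 84
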
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:51:41.040491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.040 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.041 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.041 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
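
[annotation] power.state samples the libvirt domain state, and a volume of 1 is VIR_DOMAIN_RUNNING, so all three instances here are running. The numeric codes are libvirt's own virDomainState values:

    # libvirt virDomainState codes, as returned by virDomainGetState();
    # a power.state volume of 1 therefore means "running".
    LIBVIRT_POWER_STATES = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    print(LIBVIRT_POWER_STATES[1])  # -> running
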
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:51:41.042301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.042 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.043 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.043 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.043 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.043 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.044 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.044 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.044 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
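
[annotation] The disk.device.* meters emit one sample per attached block device, which is why each instance reports three volumes here: two 1 GiB disks (1073741824 bytes) and one much smaller device (a config drive, judging by its size; the log itself does not say). Capacity, allocation (at the top of this excerpt) and usage (further below) line up with the three fields returned by libvirt's virDomainGetBlockInfo(). A sketch using the real libvirt-python binding; the connection URI and the device names are assumptions:

    # Sketch: reading capacity/allocation/physical per device via libvirt.
    # Requires the libvirt-python package; URI and device list are assumed.
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        for dev in ("vda", "vdb"):  # real device names come from the domain XML
            try:
                capacity, allocation, physical = dom.blockInfo(dev)
            except libvirt.libvirtError:
                continue  # device not present on this domain
            print(dom.UUIDString(), dev, capacity, allocation, physical)
    conn.close()
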
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:51:41.045829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.119 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.119 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.121 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.206 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.206 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.207 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.287 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.287 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.288 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.290 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.291 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.291 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.293 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:51:41.290484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.294 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.294 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.294 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.295 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 811206452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.295 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 179818558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.296 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 156094626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.296 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 791899769 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.297 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 157289336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:51:41.293783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.298 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 260856202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.299 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 36670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:51:41.299719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.300 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/cpu volume: 31440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.301 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/cpu volume: 239920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
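
[annotation] The cpu meter is cumulative guest CPU time in nanoseconds, so fd83ae88... has consumed roughly 36.7 s of CPU time since boot and 4e6fb5f9... roughly 240 s. Turning two successive readings into a utilisation percentage is arithmetic done downstream of the agent; a sketch, assuming the usual convention of 100 % per vCPU:

    # Hypothetical utilisation from two cumulative cpu samples (nanoseconds).
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # If the meter advances 3e9 ns over a 300 s polling interval on one
    # vCPU, the instance averaged 1 % CPU utilisation.
    print(cpu_util_percent(36_670_000_000, 39_670_000_000, 300.0))  # -> 1.0
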
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.302 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:51:41.302662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.303 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.303 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.304 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.304 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.305 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.305 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.306 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.306 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.308 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.309 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.309 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:51:41.308729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.310 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:51:41.310428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.312 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.312 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.312 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.313 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.313 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.313 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.313 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.314 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.314 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:51:41.312047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.315 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.316 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.316 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 2287672068 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.316 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 15976249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.317 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.317 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 2717006851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.317 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 18006907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.318 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:51:41.315586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:51:41.319309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.320 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.320 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.320 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.320 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.320 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.321 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.321 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
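The DEBUG lines above trace one complete pollster pass: discovery of local instances, a coordination check against a (here absent) hashring group, a heartbeat update, then one sample volume per instance and device. A minimal, self-contained sketch of that cycle; every name below is a simplified stand-in for illustration, not ceilometer's real API:

    # Toy model of one polling pass, mirroring the log lines above.
    class Pollster:
        name = "disk.device.write.bytes"
        coordination_group = None          # logged above as "[None]"

        def get_samples(self, resources):
            # One volume per instance/device pair, as in the
            # "_stats_to_sample" lines above (values are placeholders).
            return [(r, 0) for r in resources]

    def run_pollster(pollster, discover, heartbeat):
        resources = discover("local_instances")      # 1. discovery
        if pollster.coordination_group is not None:  # 2. coordination check
            pass  # a hashring partition would filter resources here
        heartbeat(pollster.name)                     # 3. heartbeat update
        return pollster.get_samples(resources)       # 4. sampling

    samples = run_pollster(
        Pollster(),
        discover=lambda source: ["fd83ae88", "5315fe0d", "4e6fb5f9"],
        heartbeat=lambda name: print("heartbeat", name))
    print(samples)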
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla>]
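When an inspector can never supply a meter, as with the *.rate meters here, the pollster raises PollsterPermanentError so the manager blacklists the affected resources instead of retrying them on every interval. A sketch of the failing side (assumes ceilometer is importable; the bare function stands in for a pollster's get_samples, it is not ceilometer's actual code):

    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # LibvirtInspector exposes cumulative counters only, so a *.rate
        # meter can never be derived from it. Raising
        # PollsterPermanentError makes the manager stop scheduling these
        # resources for this pollster, producing the ERROR line above.
        raise plugin_base.PollsterPermanentError(resources)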
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.323 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.324 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.324 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.324 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.325 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.325 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.325 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.326 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.327 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.328 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.328 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.328 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T15:51:41.322412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:51:41.323350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:51:41.326617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:51:41.328195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
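Note the worker-id column in the four lines above: the samples are produced by worker 14, while the "Updated heartbeat" confirmations are written by worker 12, so heartbeat timestamps are recorded by a sibling process rather than the polling worker itself. A queue-based toy illustration of that split; the queue design is an assumption for illustration, not taken from ceilometer's source:

    import multiprocessing
    from datetime import datetime, timezone

    def status_updater(queue):
        # Plays the role of worker "12": consume and record heartbeats.
        name, ts = queue.get()
        print(f"Updated heartbeat for {name} ({ts})")

    if __name__ == "__main__":
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=status_updater, args=(q,))
        p.start()
        # Plays the role of worker "14": emit a heartbeat after polling.
        q.put(("disk.device.write.bytes",
               datetime.now(timezone.utc).isoformat()))
        p.join()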
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:51:41.329979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.332 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla>]
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:51:41.331146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T15:51:41.332394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.333 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:51:41.333436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:51:41.334997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.335 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes volume: 4694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:51:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:51:41.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
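With the polling task finished, every sample emitted in this cycle is recoverable from the journal itself. A small self-contained parser for the "_stats_to_sample" lines above, handy for spot-checking volumes per instance and meter; the log file name is hypothetical:

    import re
    from collections import defaultdict

    # Mirrors the "<uuid>/<meter> volume: <n>" format seen in this log.
    SAMPLE_RE = re.compile(
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)")

    volumes = defaultdict(list)
    with open("compute-0.log") as fh:        # hypothetical file name
        for line in fh:
            m = SAMPLE_RE.search(line)
            if m:
                volumes[(m["uuid"], m["meter"])].append(int(m["volume"]))

    for (uuid, meter), vals in sorted(volumes.items()):
        print(uuid, meter, vals)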
Feb 24 15:51:41 compute-0 nova_compute[188703]: 2026-02-24 15:51:41.997 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:42 compute-0 nova_compute[188703]: 2026-02-24 15:51:42.888 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:47 compute-0 nova_compute[188703]: 2026-02-24 15:51:47.000 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
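The recurring "[POLLIN] on fd 26" lines are ovsdbapp's vlog reporting that the OVSDB IDL's monitor socket became readable and woke nova's event loop. A minimal sketch of that wait loop using the ovs Python bindings (assumes the ovs package is installed; the plain socket below is a placeholder for the real OVSDB connection):

    import select
    import socket

    from ovs import poller

    sock = socket.socket()                    # placeholder for the OVSDB socket
    p = poller.Poller()
    p.fd_wait(sock.fileno(), select.POLLIN)   # the POLLIN event logged above
    p.timer_wait(5000)                        # wake after 5s even if idle
    p.block()                                 # sleep until readable or timeout
    print("woke up")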
Feb 24 15:51:47 compute-0 podman[244611]: 2026-02-24 15:51:47.092799815 +0000 UTC m=+0.053115391 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 24 15:51:47 compute-0 podman[244610]: 2026-02-24 15:51:47.117667638 +0000 UTC m=+0.081196203 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
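The podman health_status events above are emitted each time podman runs the healthcheck baked into the container config (the 'healthcheck' mount and test visible in config_data); exit status 0 maps to health_status=healthy. The same check can be replayed by hand; a small sketch, with the container name taken from the event above:

    import subprocess

    # "podman healthcheck run" executes the container's configured check.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")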
Feb 24 15:51:47 compute-0 nova_compute[188703]: 2026-02-24 15:51:47.890 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:48 compute-0 nova_compute[188703]: 2026-02-24 15:51:48.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:48 compute-0 nova_compute[188703]: 2026-02-24 15:51:48.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:48 compute-0 nova_compute[188703]: 2026-02-24 15:51:48.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
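The "Running periodic task" lines come from oslo.service, which collects every decorated method on the manager class and runs each on its own interval. A trimmed stand-in for how such tasks are declared (illustrative only, not nova's real ComputeManager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task
        def _poll_volume_usage(self, context):
            pass                # runs at the default interval

        @periodic_task.periodic_task(spacing=600)
        def _cleanup_incomplete_migrations(self, context):
            pass                # spacing= sets a per-task interval in seconds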
Feb 24 15:51:49 compute-0 podman[244649]: 2026-02-24 15:51:49.12653949 +0000 UTC m=+0.091667819 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 15:51:50 compute-0 podman[244668]: 2026-02-24 15:51:50.143924947 +0000 UTC m=+0.101758187 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_id=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, managed_by=edpm_ansible)
Feb 24 15:51:50 compute-0 nova_compute[188703]: 2026-02-24 15:51:50.189 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:50 compute-0 nova_compute[188703]: 2026-02-24 15:51:50.190 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:50 compute-0 nova_compute[188703]: 2026-02-24 15:51:50.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:50 compute-0 nova_compute[188703]: 2026-02-24 15:51:50.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
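The "skipping..." line means soft-delete reclaim is disabled: nova only reclaims queued deletes when reclaim_instance_interval is positive. A sketch of that guard with oslo.config; the option name and its zero default match the logged behaviour, while the surrounding function is illustrative:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Soft-deleted instances older than the interval would be
        # looked up and destroyed here.

    _reclaim_queued_deletes()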
Feb 24 15:51:51 compute-0 nova_compute[188703]: 2026-02-24 15:51:51.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:51 compute-0 nova_compute[188703]: 2026-02-24 15:51:51.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:51:51 compute-0 nova_compute[188703]: 2026-02-24 15:51:51.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.004 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.664 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.664 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.665 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.665 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:51:52 compute-0 nova_compute[188703]: 2026-02-24 15:51:52.892 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.097 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.118 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.119 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
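The network_info payload cached above is plain JSON describing each VIF; pulling the fixed and floating addresses out of it takes a few lines (payload trimmed here to just the fields used):

    import json

    network_info = json.loads("""[{"network": {"subnets": [{"ips": [
      {"address": "192.168.0.39",
       "floating_ips": [{"address": "192.168.122.189"}]}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", fips)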
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.120 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.121 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.122 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.140 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 15:51:54 compute-0 podman[244689]: 2026-02-24 15:51:54.140760597 +0000 UTC m=+0.089804069 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_id=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 24 15:51:54 compute-0 nova_compute[188703]: 2026-02-24 15:51:54.141 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:51:55.710 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:51:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:51:55.713 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:51:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:51:55.714 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
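The Acquiring/acquired/released trio above is oslo.concurrency's standard debug output for a named lock, including how long the caller waited and how long the lock was held. The pattern it reflects, with a hypothetical function standing in for ProcessMonitor._check_child_processes:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; the held time logged above
        # (0.001s) is measured across this call.
        pass

    check_child_processes()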
Feb 24 15:51:55 compute-0 nova_compute[188703]: 2026-02-24 15:51:55.988 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:55 compute-0 nova_compute[188703]: 2026-02-24 15:51:55.989 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:57 compute-0 nova_compute[188703]: 2026-02-24 15:51:57.007 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:57 compute-0 nova_compute[188703]: 2026-02-24 15:51:57.896 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:51:58 compute-0 nova_compute[188703]: 2026-02-24 15:51:58.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:51:58 compute-0 nova_compute[188703]: 2026-02-24 15:51:58.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:51:58 compute-0 nova_compute[188703]: 2026-02-24 15:51:58.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:51:58 compute-0 nova_compute[188703]: 2026-02-24 15:51:58.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:51:58 compute-0 nova_compute[188703]: 2026-02-24 15:51:58.973 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.071 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.176 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.177 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.249 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.250 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.326 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.328 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.401 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.412 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.481 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.482 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.548 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.549 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.624 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.625 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.685 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.694 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 podman[204685]: time="2026-02-24T15:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:51:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:51:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
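These two GETs are the podman_exporter scraping the libpod REST API over the podman socket (the exporter's CONTAINER_HOST below points at unix:///run/podman/podman.sock). The same endpoints can be queried by hand; a sketch shelling out to curl, assuming the socket path from this log and a matching API version:

    import subprocess

    # List all containers through the libpod API, as the exporter does.
    subprocess.run(
        ['curl', '--unix-socket', '/run/podman/podman.sock',
         'http://d/v4.9.3/libpod/containers/json?all=true'],
        check=True)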
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.771 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.772 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.826 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.830 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.892 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.893 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:51:59 compute-0 nova_compute[188703]: 2026-02-24 15:51:59.950 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
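Each disk probe above runs qemu-img info under oslo.concurrency's prlimit wrapper, capping the child at a 1 GiB address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a corrupt or hostile image cannot stall the resource audit; --force-share allows reading a disk that a running guest holds open. A sketch of the same call through processutils, assuming oslo.concurrency is installed and reusing an instance path from the log:

    from oslo_concurrency import processutils

    # Reproduce the logged command: prlimit-guarded qemu-img info as JSON.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)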
Feb 24 15:52:00 compute-0 podman[244744]: 2026-02-24 15:52:00.145543164 +0000 UTC m=+0.096966745 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute)
Feb 24 15:52:00 compute-0 podman[244745]: 2026-02-24 15:52:00.195792355 +0000 UTC m=+0.134958019 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.398 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.400 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4815MB free_disk=72.19545364379883GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.401 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.402 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.591 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.592 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.593 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.594 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.594 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.879 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.899 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.901 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:52:00 compute-0 nova_compute[188703]: 2026-02-24 15:52:00.902 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
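The inventory payload above fixes the capacity Placement will schedule against: per resource class, usable capacity is (total - reserved) * allocation_ratio, which is how 8 physical vCPUs advertise as 32 schedulable ones under the 4.0 ratio. The arithmetic with the exact figures from the log:

    # usable capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2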
Feb 24 15:52:01 compute-0 openstack_network_exporter[207830]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:52:01 compute-0 openstack_network_exporter[207830]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
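These two ERRORs recur every scrape: the exporter invokes the dpif-netdev appctl commands, which only exist for the userspace (netdev/DPDK) datapath, and this host appears to run the kernel OVS datapath, so ovs-vswitchd replies "please specify an existing datapath". The calls can be reproduced directly; a sketch assuming ovs-appctl is installed and the ovs-vswitchd control socket is reachable:

    import subprocess

    # These commands succeed only when a userspace (netdev) datapath exists.
    for cmd in ('dpif-netdev/pmd-perf-show', 'dpif-netdev/pmd-rxq-show'):
        r = subprocess.run(['ovs-appctl', cmd], capture_output=True, text=True)
        print(cmd, '->', r.returncode, (r.stdout or r.stderr).strip())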
Feb 24 15:52:02 compute-0 nova_compute[188703]: 2026-02-24 15:52:02.009 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:02 compute-0 nova_compute[188703]: 2026-02-24 15:52:02.909 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:07 compute-0 nova_compute[188703]: 2026-02-24 15:52:07.013 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:07 compute-0 podman[244788]: 2026-02-24 15:52:07.170742622 +0000 UTC m=+0.113484121 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:52:07 compute-0 nova_compute[188703]: 2026-02-24 15:52:07.909 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:12 compute-0 nova_compute[188703]: 2026-02-24 15:52:12.017 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:12 compute-0 nova_compute[188703]: 2026-02-24 15:52:12.914 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:17 compute-0 nova_compute[188703]: 2026-02-24 15:52:17.021 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:17 compute-0 nova_compute[188703]: 2026-02-24 15:52:17.916 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:18 compute-0 podman[244812]: 2026-02-24 15:52:18.136906902 +0000 UTC m=+0.087933927 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:52:18 compute-0 podman[244813]: 2026-02-24 15:52:18.175997476 +0000 UTC m=+0.112384980 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible)
Feb 24 15:52:20 compute-0 podman[244855]: 2026-02-24 15:52:20.116429717 +0000 UTC m=+0.069354566 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Feb 24 15:52:21 compute-0 podman[244874]: 2026-02-24 15:52:21.133853896 +0000 UTC m=+0.088469182 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=kepler)
Feb 24 15:52:22 compute-0 nova_compute[188703]: 2026-02-24 15:52:22.025 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:22 compute-0 nova_compute[188703]: 2026-02-24 15:52:22.918 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:25 compute-0 podman[244894]: 2026-02-24 15:52:25.144144005 +0000 UTC m=+0.091226587 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z)
Feb 24 15:52:27 compute-0 nova_compute[188703]: 2026-02-24 15:52:27.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:27 compute-0 nova_compute[188703]: 2026-02-24 15:52:27.921 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:29 compute-0 podman[204685]: time="2026-02-24T15:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:52:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:52:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Feb 24 15:52:31 compute-0 podman[244917]: 2026-02-24 15:52:31.134968299 +0000 UTC m=+0.091624019 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute)
Feb 24 15:52:31 compute-0 podman[244918]: 2026-02-24 15:52:31.17067712 +0000 UTC m=+0.125675984 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 24 15:52:31 compute-0 openstack_network_exporter[207830]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:52:31 compute-0 openstack_network_exporter[207830]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:52:32 compute-0 nova_compute[188703]: 2026-02-24 15:52:32.033 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:32 compute-0 nova_compute[188703]: 2026-02-24 15:52:32.924 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:35 compute-0 nova_compute[188703]: 2026-02-24 15:52:35.884 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:35.885 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:52:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:35.887 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
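The matched UPDATE above is ovsdbapp's event machinery at work: the metadata agent registers a RowEvent against the SB_Global table, each nb_cfg bump (5 -> 6 here) fires it, and the agent then delays its chassis-table update. A rough sketch of such an event class, assuming ovsdbapp's public RowEvent API; registration details vary by agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    # Fires on any update to the single SB_Global row (e.g. nb_cfg bumps).
    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('nb_cfg is now', row.nb_cfg)

    # Registration on an ovsdbapp IDL connection would look roughly like:
    #   idl.notify_handler.watch_event(SbGlobalUpdateEvent())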
Feb 24 15:52:37 compute-0 nova_compute[188703]: 2026-02-24 15:52:37.039 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:37 compute-0 nova_compute[188703]: 2026-02-24 15:52:37.929 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:38 compute-0 podman[244960]: 2026-02-24 15:52:38.153022449 +0000 UTC m=+0.102651141 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.464 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.467 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.488 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.575 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.576 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.587 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.588 188707 INFO nova.compute.claims [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Claim successful on node compute-0.ctlplane.example.com
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.805 188707 DEBUG nova.compute.provider_tree [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.821 188707 DEBUG nova.scheduler.client.report [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.853 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.854 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.898 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.900 188707 DEBUG nova.network.neutron [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.924 188707 INFO nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 15:52:40 compute-0 nova_compute[188703]: 2026-02-24 15:52:40.973 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.113 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.120 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.121 188707 INFO nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Creating image(s)
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.122 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.123 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.124 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.137 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.190 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
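
Each qemu-img probe is run through oslo's prlimit shim, which re-execs the command with the address space capped at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a corrupt or hostile image cannot wedge the compute service. A sketch of the underlying probe, assuming qemu-img is on PATH (the prlimit wrapping is omitted):

    import json
    import os
    import subprocess

    def qemu_img_info(path: str) -> dict:
        # Mirrors the logged command: JSON output, --force-share so a disk
        # already opened by a running guest can still be inspected.
        out = subprocess.check_output(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            env=dict(os.environ, LC_ALL='C', LANG='C'),
        )
        return json.loads(out)

    # info = qemu_img_info(
    #     '/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759')
    # info['virtual-size'], info['format'], info.get('backing-filename')
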
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.192 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.194 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.213 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.270 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.272 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.313 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759,backing_fmt=raw /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
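
The instance root disk is created as a copy-on-write qcow2 overlay: reads fall through to the shared raw base image under _base/, writes stay in the per-instance file, and the 1073741824-byte virtual size matches the flavor's root_gb=1. An equivalent call with the logged paths (a sketch, not nova's imagebackend):

    import subprocess

    base = '/var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759'
    disk = '/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk'

    # qcow2 overlay backed by the raw base image; only deltas consume space.
    subprocess.check_call([
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', f'backing_file={base},backing_fmt=raw',
        disk, '1073741824',   # 1 GiB virtual size (flavor root_gb=1)
    ])
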
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.315 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0be823aa24a489a3a4f58a9a60afb2758db2759" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.316 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.370 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.384 188707 DEBUG nova.virt.disk.api [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking if we can resize image /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.386 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.451 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.453 188707 DEBUG nova.virt.disk.api [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Cannot resize image /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
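
"Cannot resize image ... to a smaller size" is the expected no-op here: the overlay was just created at 1 GiB, the flavor asks for exactly 1 GiB, and nova only ever grows a disk. A sketch of the guard implied by the two log lines above (see nova/virt/disk/api.py:can_resize_image for the real check):

    def can_resize_image(virtual_size: int, requested_size: int) -> bool:
        # Grow only; equal or smaller requests are rejected and logged.
        return requested_size > virtual_size

    can_resize_image(1073741824, 1073741824)   # False -> the DEBUG line above
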
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.454 188707 DEBUG nova.objects.instance [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.473 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.475 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.477 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.499 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.569 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.571 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.572 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.591 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.659 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.678 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.719 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.721 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.722 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.777 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.779 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.780 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Ensure instance console log exists: /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.782 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.783 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:41 compute-0 nova_compute[188703]: 2026-02-24 15:52:41.784 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:41.892 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
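
The metadata agent records that it has processed southbound config sequence 6 by updating its Chassis_Private row. A sketch of the same transaction via ovsdbapp's public API, assuming `sb_api` is an already-connected OVN southbound backend (connection setup omitted) and an ovsdbapp version whose db_set accepts if_exists, as the logged DbSetCommand repr suggests:

    # Mirrors the DbSetCommand logged above (if_exists=True skips the update
    # silently when the chassis row has vanished in the meantime).
    sb_api.db_set(
        'Chassis_Private',
        'ab329b13-e5ce-43e1-b513-c55bd650f251',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),
        if_exists=True,
    ).execute(check_error=True)
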
Feb 24 15:52:42 compute-0 nova_compute[188703]: 2026-02-24 15:52:42.046 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:42 compute-0 nova_compute[188703]: 2026-02-24 15:52:42.931 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:44 compute-0 nova_compute[188703]: 2026-02-24 15:52:44.856 188707 DEBUG nova.network.neutron [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Successfully updated port: 34a110b8-bd03-4b38-8f53-7380a2e1fc82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 15:52:44 compute-0 nova_compute[188703]: 2026-02-24 15:52:44.888 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:52:44 compute-0 nova_compute[188703]: 2026-02-24 15:52:44.889 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:52:44 compute-0 nova_compute[188703]: 2026-02-24 15:52:44.890 188707 DEBUG nova.network.neutron [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 15:52:45 compute-0 nova_compute[188703]: 2026-02-24 15:52:45.027 188707 DEBUG nova.compute.manager [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-changed-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:52:45 compute-0 nova_compute[188703]: 2026-02-24 15:52:45.027 188707 DEBUG nova.compute.manager [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Refreshing instance network info cache due to event network-changed-34a110b8-bd03-4b38-8f53-7380a2e1fc82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:52:45 compute-0 nova_compute[188703]: 2026-02-24 15:52:45.028 188707 DEBUG oslo_concurrency.lockutils [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:52:45 compute-0 nova_compute[188703]: 2026-02-24 15:52:45.716 188707 DEBUG nova.network.neutron [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.741 188707 DEBUG nova.network.neutron [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.852 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.853 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Instance network_info: |[{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
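
Everything the vif plug and the guest XML need rides in this network_info structure. A sketch pulling the fixed and floating addresses back out of it (the literal below is abbreviated to just the fields this example touches):

    import json

    network_info = json.loads('''[{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82",
      "devname": "tap34a110b8-bd",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.42", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.172"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['devname'], ip['address'], floats)
    # tap34a110b8-bd 192.168.0.42 ['192.168.122.172']
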
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.854 188707 DEBUG oslo_concurrency.lockutils [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.855 188707 DEBUG nova.network.neutron [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Refreshing network info cache for port 34a110b8-bd03-4b38-8f53-7380a2e1fc82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.858 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Start _get_guest_xml network_info=[{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.867 188707 WARNING nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.874 188707 DEBUG nova.virt.libvirt.host [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.875 188707 DEBUG nova.virt.libvirt.host [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.879 188707 DEBUG nova.virt.libvirt.host [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.880 188707 DEBUG nova.virt.libvirt.host [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
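
The v1 probe misses and the v2 probe hits because this host runs a unified cgroup hierarchy, where the available controllers are advertised in a single file. A sketch of the v2 check, assuming the conventional /sys/fs/cgroup mount (nova's implementation differs in detail):

    def has_cgroupsv2_cpu_controller(root: str = '/sys/fs/cgroup') -> bool:
        # cgroup.controllers holds a space-separated list, e.g. "cpuset cpu io".
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False   # no unified hierarchy -> not a cgroups v2 host
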
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.880 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.881 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T15:44:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='521ca388-0b2e-40c6-bb06-118d4ed86b49',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:44:16Z,direct_url=<?>,disk_format='qcow2',id=de6b8fc8-e0dc-4bbf-943b-e6ac6027af11,min_disk=0,min_ram=0,name='cirros',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:44:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.882 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.882 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.882 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.883 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.883 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.884 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.884 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.884 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.885 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.885 188707 DEBUG nova.virt.hardware [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
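
With no flavor or image constraints (every preference and limit logged as 0:0:0, caps at 65536), the search for a 1-vCPU guest can only produce 1 socket x 1 core x 1 thread. A worked sketch of the enumeration, not nova's exact algorithm:

    def possible_topologies(vcpus: int, max_each: int = 65536):
        # Yield every sockets*cores*threads factorization of the vCPU count.
        bound = min(vcpus, max_each)
        for sockets in range(1, bound + 1):
            for cores in range(1, bound + 1):
                for threads in range(1, bound + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log
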
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.892 188707 DEBUG nova.virt.libvirt.vif [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',id=4,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-c6i55v07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:52:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Feb 24 15:52:47 compute-0 nova_compute[188703]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.892 188707 DEBUG nova.network.os_vif_util [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.893 188707 DEBUG nova.network.os_vif_util [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.894 188707 DEBUG nova.objects.instance [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.992 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] End _get_guest_xml xml=<domain type="kvm">
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <uuid>2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354</uuid>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <name>instance-00000004</name>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <memory>524288</memory>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <metadata>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:name>vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh</nova:name>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 15:52:47</nova:creationTime>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:flavor name="m1.small">
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:memory>512</nova:memory>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:ephemeral>1</nova:ephemeral>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:user uuid="bd338d866e3242aeb685fec99c451955">admin</nova:user>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:project uuid="4407f5b870e145d8917119ad928717e8">admin</nova:project>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         <nova:port uuid="34a110b8-bd03-4b38-8f53-7380a2e1fc82">
Feb 24 15:52:47 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="192.168.0.42" ipVersion="4"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </metadata>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <system>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="serial">2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="uuid">2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </system>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <os>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </os>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <features>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <apic/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </features>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </clock>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <target dev="vdb" bus="virtio"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.config"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:57:29:21"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <target dev="tap34a110b8-bd"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </interface>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/console.log" append="off"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </serial>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <video>
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </video>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 15:52:47 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 15:52:47 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 15:52:47 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:52:47 compute-0 nova_compute[188703]: </domain>
Feb 24 15:52:47 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
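
The XML block above is the complete guest definition handed to libvirt for instance-00000004. For orientation, a minimal standalone sketch of what defining and starting such a domain looks like through libvirt-python (nova drives this via its own Guest/Host wrappers rather than literally like this; starting paused is an assumption suggested by the Paused -> Resumed lifecycle events further down):

    import libvirt

    # Connect to the system libvirtd on the compute node (KVM per <domain type="kvm">).
    conn = libvirt.open('qemu:///system')

    with open('domain.xml') as f:  # the <domain> document logged above
        xml = f.read()

    dom = conn.defineXML(xml)  # persist the definition
    # Start paused so network plugging can finish before vCPUs run.
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
    print(dom.name(), dom.UUIDString())
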
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.994 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Preparing to wait for external event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.995 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.996 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.997 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
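
The acquire/held/released triple above is oslo.concurrency's standard lock instrumentation around InstanceEvents._create_or_get_event. The same pattern in application code, as a minimal sketch against oslo.concurrency (the lock name mirrors the "<instance-uuid>-events" key in the log):

    from oslo_concurrency import lockutils

    uuid = '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354'

    # Context-manager form; with DEBUG logging enabled it emits the same
    # 'acquired ... waited' / '"released" ... held' lines seen above.
    with lockutils.lock(f'{uuid}-events'):
        pass  # critical section: create or fetch the pending event record

    # Decorator form, used throughout nova for whole functions.
    @lockutils.synchronized(f'{uuid}-events')
    def _create_or_get_event():
        pass
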
Feb 24 15:52:47 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.999 188707 DEBUG nova.virt.libvirt.vif [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T15:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',id=4,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-c6i55v07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T15:52:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:47.999 188707 DEBUG nova.network.os_vif_util [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.001 188707 DEBUG nova.network.os_vif_util [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.002 188707 DEBUG os_vif [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.003 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.005 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.008 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.014 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.016 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34a110b8-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.017 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34a110b8-bd, col_values=(('external_ids', {'iface-id': '34a110b8-bd03-4b38-8f53-7380a2e1fc82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:29:21', 'vm-uuid': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
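
The two commands above run as one OVSDB transaction: add the tap port to br-int, then stamp the Interface row with the external_ids that OVN matches against (the shell equivalent would be ovs-vsctl add-port br-int tap34a110b8-bd -- set Interface tap34a110b8-bd external_ids:iface-id=...). A rough sketch of the same transaction through ovsdbapp's Open_vSwitch API, assuming a local OVSDB socket (the socket path and timeout are assumptions, not from this deployment):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {  # values copied from the DbSetCommand line above
        'iface-id': '34a110b8-bd03-4b38-8f53-7380a2e1fc82',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:57:29:21',
        'vm-uuid': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354',
    }

    # One transaction, two commands -- matching txn n=1 idx=0/idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap34a110b8-bd', may_exist=True))
        txn.add(api.db_set('Interface', 'tap34a110b8-bd',
                           ('external_ids', external_ids)))
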
Feb 24 15:52:48 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:52:47.892 188707 DEBUG nova.virt.libvirt.vif [None req-2f9ee75d-1f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.021 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:48 compute-0 NetworkManager[56995]: <info>  [1771948368.0229] manager: (tap34a110b8-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.026 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.028 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.029 188707 INFO os_vif [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd')
Feb 24 15:52:48 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:52:47.999 188707 DEBUG nova.virt.libvirt.vif [None req-2f9ee75d-1f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
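
These two rsyslogd entries (here and at 15:52:48 above) report that the oversized nova.virt.libvirt.vif DEBUG messages, the ones carrying the full base64 user_data blob, exceeded rsyslog's default 8096-byte message limit and were truncated on the rsyslog side; journald retains the full text, which is why those entries appear as single very long lines in this capture. If complete messages are wanted in rsyslog as well, the documented remedy is raising $MaxMessageSize (for example "$MaxMessageSize 64k") near the top of rsyslog.conf, before the input modules load; the 64k value is illustrative, not taken from this deployment.
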
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.179 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.179 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.179 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.179 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No VIF found with MAC fa:16:3e:57:29:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.180 188707 INFO nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Using config drive
Feb 24 15:52:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 15:52:48 compute-0 podman[245014]: 2026-02-24 15:52:48.601325092 +0000 UTC m=+0.106839258 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:52:48 compute-0 podman[245015]: 2026-02-24 15:52:48.605309841 +0000 UTC m=+0.109272043 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.858 188707 INFO nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Creating config drive at /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.config
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.866 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7ga__22u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:48 compute-0 nova_compute[188703]: 2026-02-24 15:52:48.991 188707 DEBUG oslo_concurrency.processutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7ga__22u" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
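
The pair of processutils entries above is the config-drive build: nova stages the metadata files in a temporary directory and packs them into an ISO9660 image with the config-2 volume label that cloud-init probes for. A minimal sketch of the same invocation through oslo.concurrency (paths copied from the log; any staging directory would do):

    from oslo_concurrency import processutils

    # Same command line the log shows, returning (stdout, stderr).
    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',    # volume label cloud-init searches for
        '/tmp/tmp7ga__22u')  # staging directory from the log line above
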
Feb 24 15:52:49 compute-0 kernel: tap34a110b8-bd: entered promiscuous mode
Feb 24 15:52:49 compute-0 NetworkManager[56995]: <info>  [1771948369.0678] manager: (tap34a110b8-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.070 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:49 compute-0 ovn_controller[98701]: 2026-02-24T15:52:49Z|00045|binding|INFO|Claiming lport 34a110b8-bd03-4b38-8f53-7380a2e1fc82 for this chassis.
Feb 24 15:52:49 compute-0 ovn_controller[98701]: 2026-02-24T15:52:49Z|00046|binding|INFO|34a110b8-bd03-4b38-8f53-7380a2e1fc82: Claiming fa:16:3e:57:29:21 192.168.0.42
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.075 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.080 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:29:21 192.168.0.42'], port_security=['fa:16:3e:57:29:21 192.168.0.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-k2kyvgwezk52-kclaz4xd52sx-port-bw4jnbdw5py4', 'neutron:cidrs': '192.168.0.42/24', 'neutron:device_id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-k2kyvgwezk52-kclaz4xd52sx-port-bw4jnbdw5py4', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=34a110b8-bd03-4b38-8f53-7380a2e1fc82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.081 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 34a110b8-bd03-4b38-8f53-7380a2e1fc82 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a bound to our chassis
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.083 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:52:49 compute-0 ovn_controller[98701]: 2026-02-24T15:52:49Z|00047|binding|INFO|Setting lport 34a110b8-bd03-4b38-8f53-7380a2e1fc82 ovn-installed in OVS
Feb 24 15:52:49 compute-0 ovn_controller[98701]: 2026-02-24T15:52:49Z|00048|binding|INFO|Setting lport 34a110b8-bd03-4b38-8f53-7380a2e1fc82 up in Southbound
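
The three binding messages from ovn-controller are the OVN half of the plug: this chassis claims the logical port, marks it ovn-installed in OVS, and sets it up in the Southbound DB, which is what ultimately produces the network-vif-plugged notification nova is waiting on below. A hedged way to inspect that binding from the compute node, assuming ovn-sbctl is available and can reach the Southbound DB:

    import subprocess

    # Show which chassis holds the logical port and whether it is up.
    subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=34a110b8-bd03-4b38-8f53-7380a2e1fc82'],
        check=True)
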
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.087 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.101 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2bbc12-8fdf-41d5-b1a5-57d2a66cbea0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:52:49 compute-0 systemd-machined[158049]: New machine qemu-4-instance-00000004.
Feb 24 15:52:49 compute-0 systemd-udevd[245080]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 15:52:49 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 24 15:52:49 compute-0 NetworkManager[56995]: <info>  [1771948369.1355] device (tap34a110b8-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 15:52:49 compute-0 NetworkManager[56995]: <info>  [1771948369.1410] device (tap34a110b8-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.138 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[db49375b-340e-4388-9cc0-8a84b5f3aa40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.144 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[e8657945-f3d0-4a05-8f43-8edcb801fe6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.172 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[1baa1bc7-faaf-41be-97b4-3411bdec240e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.194 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[54dab834-1ca0-4af5-8c1c-4f804fd52cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 33062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245091, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.214 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[10c46bc5-631f-4a4d-88fc-c4bdf3d64347]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245093, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245093, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
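
The two privsep replies above are the metadata agent inspecting its network namespace: the tap863f062e-11 veth inside ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a carries both 169.254.169.254/32 (the metadata endpoint guests query) and 192.168.0.2/24 on the tenant subnet. The same state can be checked from a shell on the node; wrapped in Python here for consistency (the namespace name is taken from the 'target' field in the reply above):

    import subprocess

    # List addresses inside the OVN metadata namespace for this network.
    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a',
         'ip', 'addr', 'show'],
        check=True)
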
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.216 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.217 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.219 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.220 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.221 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.222 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:52:49 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:49.223 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.381 188707 DEBUG nova.network.neutron [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updated VIF entry in instance network info cache for port 34a110b8-bd03-4b38-8f53-7380a2e1fc82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.383 188707 DEBUG nova.network.neutron [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.466 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948369.4658773, 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.468 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] VM Started (Lifecycle Event)
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.475 188707 DEBUG oslo_concurrency.lockutils [req-8fd4acc4-e7aa-4771-94fe-6d45ebd70df5 req-6144e84a-21f1-42bd-b069-b098ae168d97 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.578 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.589 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948369.467761, 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.590 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] VM Paused (Lifecycle Event)
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.659 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.671 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.755 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.775 188707 DEBUG nova.compute.manager [req-33ea1ff6-277e-4dc9-b9bd-fa4c2d0efcb7 req-18c7b284-4828-440b-8332-701203c5df06 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.776 188707 DEBUG oslo_concurrency.lockutils [req-33ea1ff6-277e-4dc9-b9bd-fa4c2d0efcb7 req-18c7b284-4828-440b-8332-701203c5df06 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.777 188707 DEBUG oslo_concurrency.lockutils [req-33ea1ff6-277e-4dc9-b9bd-fa4c2d0efcb7 req-18c7b284-4828-440b-8332-701203c5df06 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.777 188707 DEBUG oslo_concurrency.lockutils [req-33ea1ff6-277e-4dc9-b9bd-fa4c2d0efcb7 req-18c7b284-4828-440b-8332-701203c5df06 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.778 188707 DEBUG nova.compute.manager [req-33ea1ff6-277e-4dc9-b9bd-fa4c2d0efcb7 req-18c7b284-4828-440b-8332-701203c5df06 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Processing event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.778 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.784 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948369.7839801, 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.784 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] VM Resumed (Lifecycle Event)
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.786 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.793 188707 INFO nova.virt.libvirt.driver [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Instance spawned successfully.
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.793 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.884 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.893 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.894 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.895 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.896 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.897 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.898 188707 DEBUG nova.virt.libvirt.driver [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:52:49 compute-0 nova_compute[188703]: 2026-02-24 15:52:49.904 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.009 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.115 188707 INFO nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Took 9.00 seconds to spawn the instance on the hypervisor.
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.116 188707 DEBUG nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.218 188707 INFO nova.compute.manager [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Took 9.67 seconds to build instance.
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.372 188707 DEBUG oslo_concurrency.lockutils [None req-2f9ee75d-1f21-4586-9ec1-0d76a5b901ce bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.901 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:50 compute-0 nova_compute[188703]: 2026-02-24 15:52:50.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:51 compute-0 podman[245104]: 2026-02-24 15:52:51.17924193 +0000 UTC m=+0.117510034 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 15:52:51 compute-0 podman[245124]: 2026-02-24 15:52:51.271626584 +0000 UTC m=+0.076301365 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, name=ubi9, config_id=kepler, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 24 15:52:51 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 15:52:51 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.924 188707 DEBUG nova.compute.manager [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.925 188707 DEBUG oslo_concurrency.lockutils [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.925 188707 DEBUG oslo_concurrency.lockutils [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.925 188707 DEBUG oslo_concurrency.lockutils [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.926 188707 DEBUG nova.compute.manager [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] No waiting events found dispatching network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.926 188707 WARNING nova.compute.manager [req-d0d6a3d6-2ce5-40d8-8a0e-4b3448a5a16d req-fdeaa4ae-40b2-4dcb-8cbc-2afc0a648611 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received unexpected event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 for instance with vm_state active and task_state None.
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:51 compute-0 nova_compute[188703]: 2026-02-24 15:52:51.941 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:52:52 compute-0 nova_compute[188703]: 2026-02-24 15:52:52.696 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:52:52 compute-0 nova_compute[188703]: 2026-02-24 15:52:52.697 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:52:52 compute-0 nova_compute[188703]: 2026-02-24 15:52:52.699 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:52:52 compute-0 nova_compute[188703]: 2026-02-24 15:52:52.938 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:53 compute-0 nova_compute[188703]: 2026-02-24 15:52:53.021 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:55.711 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:55.712 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:52:55.713 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.028 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.047 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.047 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.048 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.049 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.050 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.050 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:52:56 compute-0 podman[245164]: 2026-02-24 15:52:56.15420712 +0000 UTC m=+0.105423202 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, container_name=openstack_network_exporter, distribution-scope=public, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, managed_by=edpm_ansible, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:56 compute-0 nova_compute[188703]: 2026-02-24 15:52:56.981 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:57 compute-0 nova_compute[188703]: 2026-02-24 15:52:57.942 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:52:58 compute-0 nova_compute[188703]: 2026-02-24 15:52:58.977 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.137 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.200 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.202 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.251 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.252 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.301 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.302 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.363 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.371 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.417 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.419 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.489 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.490 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.539 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.540 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.615 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.623 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.692 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.693 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 podman[204685]: time="2026-02-24T15:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:52:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:52:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.740 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.767 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.833 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.836 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.900 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.911 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.961 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:52:59 compute-0 nova_compute[188703]: 2026-02-24 15:52:59.963 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.028 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.030 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.081 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.083 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.149 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.585 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.587 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4640MB free_disk=72.19447326660156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.587 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.588 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.718 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.718 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.719 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.719 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.720 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.721 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.814 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.829 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.852 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:53:00 compute-0 nova_compute[188703]: 2026-02-24 15:53:00.852 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:53:01 compute-0 openstack_network_exporter[207830]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:53:01 compute-0 openstack_network_exporter[207830]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:53:02 compute-0 podman[245235]: 2026-02-24 15:53:02.165720801 +0000 UTC m=+0.100496888 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 15:53:02 compute-0 podman[245236]: 2026-02-24 15:53:02.200493004 +0000 UTC m=+0.140068193 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
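The podman health_status events above (and the similar ones that follow) are the per-container healthcheck timers reporting health_status=healthy with health_failing_streak=0. The same state can be read back on demand; a sketch using the podman CLI from Python, with the container name taken from the log:

    # Read the health state that the health_status events above report.
    # Assumes podman is installed and the caller can see these containers.
    import subprocess

    name = "ovn_controller"  # or ceilometer_agent_compute, etc., from the log
    out = subprocess.run(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}} (failing streak {{.State.Health.FailingStreak}})",
         name],
        capture_output=True, text=True, check=True,
    )
    print(name, "->", out.stdout.strip())  # e.g. "healthy (failing streak 0)"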
Feb 24 15:53:02 compute-0 nova_compute[188703]: 2026-02-24 15:53:02.945 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:03 compute-0 nova_compute[188703]: 2026-02-24 15:53:03.027 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:07 compute-0 nova_compute[188703]: 2026-02-24 15:53:07.947 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:08 compute-0 nova_compute[188703]: 2026-02-24 15:53:08.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:09 compute-0 podman[245278]: 2026-02-24 15:53:09.105617314 +0000 UTC m=+0.067914904 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:53:12 compute-0 nova_compute[188703]: 2026-02-24 15:53:12.949 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:13 compute-0 nova_compute[188703]: 2026-02-24 15:53:13.033 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:14 compute-0 sshd-session[245302]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 15:53:17 compute-0 nova_compute[188703]: 2026-02-24 15:53:17.953 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:18 compute-0 nova_compute[188703]: 2026-02-24 15:53:18.035 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:19 compute-0 podman[245304]: 2026-02-24 15:53:19.125967056 +0000 UTC m=+0.080685904 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:53:19 compute-0 podman[245305]: 2026-02-24 15:53:19.153396158 +0000 UTC m=+0.102634916 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:53:19 compute-0 ovn_controller[98701]: 2026-02-24T15:53:19Z|00049|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Feb 24 15:53:22 compute-0 podman[245347]: 2026-02-24 15:53:22.135043937 +0000 UTC m=+0.081756503 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 24 15:53:22 compute-0 podman[245346]: 2026-02-24 15:53:22.152478175 +0000 UTC m=+0.096575450 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 24 15:53:22 compute-0 nova_compute[188703]: 2026-02-24 15:53:22.957 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:23 compute-0 nova_compute[188703]: 2026-02-24 15:53:23.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:23 compute-0 ovn_controller[98701]: 2026-02-24T15:53:23Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:29:21 192.168.0.42
Feb 24 15:53:23 compute-0 ovn_controller[98701]: 2026-02-24T15:53:23Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:29:21 192.168.0.42
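The pinctrl lines above are OVN's native DHCP responder inside ovn_controller answering on the compute node itself: it offers and acknowledges 192.168.0.42 for MAC fa:16:3e:57:29:21 with no external DHCP server involved. The options it serves live in the northbound DHCP_Options table; a sketch for inspecting them (assumes ovn-nbctl on PATH and reachability of the OVN northbound database, which on a deployment like this may require running inside the ovn_controller container or passing an explicit --db endpoint):

    # List the northbound DHCP_Options rows that back the DHCPOFFER/DHCPACK
    # replies logged above. Connection details are deployment-specific.
    import subprocess

    out = subprocess.run(["ovn-nbctl", "list", "DHCP_Options"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)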
Feb 24 15:53:27 compute-0 podman[245397]: 2026-02-24 15:53:27.139525887 +0000 UTC m=+0.096503448 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 15:53:27 compute-0 nova_compute[188703]: 2026-02-24 15:53:27.960 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:28 compute-0 nova_compute[188703]: 2026-02-24 15:53:28.042 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:29 compute-0 podman[204685]: time="2026-02-24T15:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:53:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:53:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
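The two HTTP access lines above are the prometheus-podman-exporter scraping podman's libpod REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the podman_exporter config_data earlier). The same endpoints can be queried by hand; a sketch shelling out to curl with --unix-socket, paths copied from the log:

    # Hit the same libpod endpoints the exporter just used. The "http://d"
    # host is a dummy required by curl when talking to a unix socket.
    import subprocess

    SOCK = "/run/podman/podman.sock"
    for path in ("/v4.9.3/libpod/containers/json?all=true",
                 "/v4.9.3/libpod/containers/stats?all=false&stream=false"):
        out = subprocess.run(
            ["curl", "-s", "--unix-socket", SOCK, "http://d" + path],
            capture_output=True, text=True, check=True,
        )
        print(path, "->", len(out.stdout), "bytes of JSON")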
Feb 24 15:53:31 compute-0 openstack_network_exporter[207830]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:53:31 compute-0 openstack_network_exporter[207830]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:53:32 compute-0 nova_compute[188703]: 2026-02-24 15:53:32.963 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:33 compute-0 nova_compute[188703]: 2026-02-24 15:53:33.046 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:33 compute-0 podman[245418]: 2026-02-24 15:53:33.178132401 +0000 UTC m=+0.129667558 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223)
Feb 24 15:53:33 compute-0 podman[245419]: 2026-02-24 15:53:33.178504211 +0000 UTC m=+0.129023870 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 24 15:53:37 compute-0 nova_compute[188703]: 2026-02-24 15:53:37.967 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:38 compute-0 nova_compute[188703]: 2026-02-24 15:53:38.050 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.830 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.830 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
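The pair of manager lines above notes that the [pollsters] source carries more pollsters than worker threads ([1]), so pollsters run serially and the polling cycle stretches accordingly. That is ordinary ThreadPoolExecutor queueing; a minimal sketch with illustrative numbers (not ceilometer's actual code):

    # With one worker, n tasks take ~n * task_time: the "longer than usual"
    # condition the polling manager warns about above.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:
        done = list(executor.map(poll, ["memory.usage",
                                        "disk.device.allocation",
                                        "network.outgoing.packets.error"]))
    print(done, f"{time.monotonic() - start:.2f}s")  # ~0.30s, not ~0.10s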
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
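Each "Registering pollster" line above wraps one stevedore Extension loaded for the [pollsters] source and queues it on the shared ThreadPoolExecutor. The plugin-loading half of that is stevedore's ExtensionManager; a sketch, assuming ceilometer's compute pollster entry-point namespace is 'ceilometer.poll.compute' (an assumption from ceilometer's packaging metadata, not something this log states):

    # Enumerate pollster plugins the way stevedore exposes them to the
    # polling manager. The namespace below is an assumption.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    for ext in mgr:
        # ext is the <stevedore.extension.Extension ...> object seen in the
        # log lines above; ext.plugin is the pollster class.
        print(ext.name, ext.plugin.__name__)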
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.837 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 15:53:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:39.838 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 15:53:40 compute-0 podman[245464]: 2026-02-24 15:53:40.152636017 +0000 UTC m=+0.106828051 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.890 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 24 Feb 2026 15:53:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c422bc51-f3a2-4536-aaf1-c8d90947a9e2 x-openstack-request-id: req-c422bc51-f3a2-4536-aaf1-c8d90947a9e2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.890 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354", "name": "vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh", "status": "ACTIVE", "tenant_id": "4407f5b870e145d8917119ad928717e8", "user_id": "bd338d866e3242aeb685fec99c451955", "metadata": {"metering.server_group": "105127c2-20fd-4471-8609-2ac19fea2fd2"}, "hostId": "781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62", "image": {"id": "de6b8fc8-e0dc-4bbf-943b-e6ac6027af11", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de6b8fc8-e0dc-4bbf-943b-e6ac6027af11"}]}, "flavor": {"id": "521ca388-0b2e-40c6-bb06-118d4ed86b49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/521ca388-0b2e-40c6-bb06-118d4ed86b49"}]}, "created": "2026-02-24T15:52:39Z", "updated": "2026-02-24T15:52:50Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.42", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:57:29:21"}, {"version": 4, "addr": "192.168.122.172", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:57:29:21"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T15:52:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.890 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 used request id req-c422bc51-f3a2-4536-aaf1-c8d90947a9e2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
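Lines 15:53:39.838 through 15:53:40.890 above are one complete Nova API round trip: discovery GETs /v2.1/servers/<uuid> (logged as a curl equivalent with the token hashed), receives a 200 with the server body, and records the request id. A sketch of the same call through keystoneauth1 and python-novaclient, with the UUID taken from the log and every auth value a placeholder:

    # Fetch one server the way ceilometer's discovery does. All credentials
    # and the auth_url below are placeholders, not from this deployment.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="secret",                    # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354")
    print(server.name, server.status, server.metadata)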
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.891 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'name': 'vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.894 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.898 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'name': 'vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.902 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'name': 'vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
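The four "instance data" lines above are discover_libvirt_polling merging Nova's metadata with the domains actually running under the local libvirt. The local half of that join can be listed directly; a sketch with libvirt-python (assumes read access to the system libvirt socket):

    # Enumerate the running domains that discovery pairs with Nova metadata.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        print(dom.name(), dom.UUIDString())  # e.g. instance-00000004 2cb64c5b-...
    conn.close()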
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.902 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:53:40.903348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.929 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/memory.usage volume: 49.65625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.950 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:40.974 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.004 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:53:41.005970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.028 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.029 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.029 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.047 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.047 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.047 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.065 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.065 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.065 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.086 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.086 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.086 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:53:41.150627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.154 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 / tap34a110b8-bd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.155 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.159 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.161 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.164 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
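The run just logged shows the fixed shape every pollster cycle follows in this log: a discovery call over local_instances, a coordination check that is skipped because no hashring is configured, a heartbeat update, one DEBUG sample line per instance, and a closing INFO line. A minimal Python sketch of that control flow follows; the names (run_pollster, discover, get_stat) are illustrative, not ceilometer's API.

    from datetime import datetime, timezone

    def run_pollster(name, discover, get_stat, heartbeats):
        # "Executing discovery process for pollsters [...]": find local instances
        instances = discover()
        # Coordination is skipped here: the pollster is not in a source that
        # requires a hashring, matching the "[None]" lines above.
        # "Pollster heartbeat update: <name>": record liveness before sampling
        heartbeats[name] = datetime.now(timezone.utc)
        samples = []
        for inst in instances:
            # one DEBUG line per sample: "<uuid>/<meter> volume: <n>"
            samples.append((inst, name, get_stat(inst)))
        # "Finished polling pollster <name> in the context of pollsters"
        return samples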
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.165 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.166 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.166 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:53:41.165643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.166 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.167 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.168 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.168 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:53:41.168127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.168 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.169 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.169 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
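The zero at 15:53:41.168 for instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 lines up with the earlier "No delta meter predecessor" message for its tap34a110b8-bd interface: a *.delta meter needs a previous cumulative reading before it can report anything. A hedged sketch of that derivation follows; the cache layout is an assumption, not ceilometer's internal structure.

    _previous = {}  # (instance_id, meter) -> last cumulative value (assumed cache shape)

    def delta(instance_id, meter, cumulative):
        key = (instance_id, meter)
        prev = _previous.get(key)
        _previous[key] = cumulative
        if prev is None or cumulative < prev:
            # first observation ("no delta meter predecessor") or counter reset
            return 0
        return cumulative - prev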
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.170 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.171 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:53:41.170512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.171 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
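All four power.state samples report volume 1, which is consistent with libvirt's "running" domain state (VIR_DOMAIN_RUNNING == 1). The mapping below is libvirt's documented state enum, included only as a decoding aid.

    # libvirt virDomainState values; volume 1 in the samples above decodes to "running"
    POWER_STATES = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(POWER_STATES[1])  # running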
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:53:41.173319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.173 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.174 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.174 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.174 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.175 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.175 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.176 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.176 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.176 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.177 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.177 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.178 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
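Each instance reports three disk.device.capacity samples, and the 1073741824 values are exactly 1 GiB of virtual disk size; the smaller disk.device.allocation and disk.device.usage figures logged earlier are what the backing storage actually consumes, mirroring the capacity/allocation/physical triple that libvirt's block-info call exposes. Illustrative arithmetic only:

    capacity = 1073741824   # virtual size from the samples above: exactly 1 GiB
    usage = 21299200        # physical bytes in use for the same instance's first disk
    print(f"{usage / capacity:.1%} of the 1 GiB virtual disk is in use")  # ~2.0%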
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.179 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:53:41.180186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.256 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.257 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.258 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.345 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.346 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.347 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.434 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.435 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.435 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.548 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.549 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.549 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
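The three read.bytes samples per instance correspond to three attached block devices. Counters of this kind are available from libvirt-python's blockStats() call, which returns cumulative (rd_req, rd_bytes, wr_req, wr_bytes, errs) per device; the sketch below assumes a local qemu:///system socket and device names vda/vdb/vdc, neither of which is confirmed by the log.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')   # assumed local hypervisor URI
    dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
    for dev in ('vda', 'vdb', 'vdc'):               # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'cumulative read bytes:', rd_bytes)  # e.g. 23308800 for the first disk
    conn.close()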
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.550 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.551 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.551 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.551 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.552 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.552 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.554 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 2224753847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:53:41.551210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.554 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 114510394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.555 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 94768043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.555 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.555 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.556 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.556 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 811206452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.556 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 179818558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.557 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 156094626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.557 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 791899769 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.557 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 157289336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.557 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 260856202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.559 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/cpu volume: 33310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.560 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 38210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.560 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/cpu volume: 33010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.560 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/cpu volume: 304120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
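The cpu volumes are cumulative guest CPU time in nanoseconds, so 33310000000 is about 33.3 s of CPU time consumed by instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 since it started. A utilisation percentage needs two successive polls; the helper below is plain arithmetic under that reading, not ceilometer's transformer.

    def cpu_util_pct(cpu_ns_prev, cpu_ns_now, wall_seconds, vcpus):
        used_s = (cpu_ns_now - cpu_ns_prev) / 1e9
        return 100.0 * used_s / (wall_seconds * vcpus)

    # hypothetical follow-up reading: 0.6 s of CPU over a 10 s poll on 1 vCPU -> 6.0 %
    print(cpu_util_pct(33_310_000_000, 33_910_000_000, 10, 1))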
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.562 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.562 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.562 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.563 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.563 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.563 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.564 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.564 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.564 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.565 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.565 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.566 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:53:41.554271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:53:41.559583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:53:41.562145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.568 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.568 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.568 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
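Note the process column: the sampling lines come from pid 14 while every "Updated heartbeat for ..." line comes from pid 12 a few milliseconds later, so status reporting evidently runs outside the polling loop. The queue hand-off below is only an assumption used to illustrate that ordering, not the agent's actual mechanism.

    import multiprocessing as mp
    from datetime import datetime, timezone

    def status_updater(q):
        # plays the role of the second process (pid 12 above) that logs updates
        while True:
            meter, ts = q.get()
            if meter is None:
                break
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    if __name__ == '__main__':
        q = mp.Queue()
        p = mp.Process(target=status_updater, args=(q,))
        p.start()
        q.put(('network.incoming.packets.drop', datetime.now(timezone.utc)))
        q.put((None, None))  # shutdown sentinel
        p.join()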
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.570 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.570 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.571 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.571 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets volume: 62 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.573 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.573 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.573 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.574 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.574 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.574 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.574 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.575 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.575 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:53:41.567825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.576 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:53:41.570504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:53:41.572922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.576 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.576 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.578 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 2439281559 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.578 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 10083548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.578 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.579 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.579 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.579 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.580 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 2328620032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.580 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 15976249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.580 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.581 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 2883365654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.581 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 18006907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.582 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:53:41.578224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
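Each pollster run above first logs a coordination check and then proceeds uncoordinated because the source defines no coordination group. A sketch of that gate under invented names (the real logic sits in ceilometer/polling/manager.py near the quoted line numbers 333 and 355):

    # Hypothetical coordination gate mirroring the two DEBUG lines:
    # with a group name of None there is no hashring to consult, so this
    # agent polls every discovered resource itself.
    def needs_coordination(group_name, hashrings):
        if group_name is None:
            return False  # "not configured in a source ... coordination"
        return group_name in hashrings

    assert needs_coordination(None, {}) is False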
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.583 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.584 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.584 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.584 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.585 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.585 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.585 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.586 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.586 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.586 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.587 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:53:41.583556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T15:53:41.588543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.588 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh>]
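The ERROR above records the manager blacklisting network.incoming.bytes.rate for one instance after the libvirt inspector reported that it does not provide data for IncomingBytesRatePollster. A hedged sketch of that contract; the exception name comes from the logged traceback class, but its attributes and the helper below are invented:

    # Stand-in for ceilometer.polling.plugin_base.PollsterPermanentError;
    # the attribute name 'resources' is hypothetical.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    def poll_once(get_samples, resources, blacklist):
        active = [r for r in resources if r not in blacklist]
        try:
            return get_samples(active)
        except PollsterPermanentError as err:
            # "Prevent pollster <name> from polling [...] anymore!"
            blacklist.update(err.resources)
            return []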
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.590 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.590 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.590 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.590 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.591 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.591 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.591 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.592 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.592 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.592 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.593 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.593 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.594 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.595 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.595 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.596 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.596 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:53:41.589860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.598 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:53:41.595313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.598 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.598 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes.delta volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.599 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes.delta volume: 2610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.600 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:53:41.597992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.601 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:53:41.600618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:53:41.602361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh>]
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T15:53:41.603969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.605 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.605 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.605 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.605 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.606 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.607 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.608 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.608 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.608 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes volume: 7304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:53:41.605149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:53:41.607775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.611 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.612 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.612 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.612 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.612 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.612 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:53:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:53:41.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
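The burst of "Finished processing pollster [...]" lines marks one polling task draining its whole pollster list for this interval. A minimal sketch of that outer loop, with invented names standing in for execute_polling_task_processing:

    # Hypothetical outer loop for one polling interval: discover
    # resources, poll, and log per-meter completion as seen above.
    import logging

    LOG = logging.getLogger(__name__)

    def run_polling_task(pollsters, discover):
        for pollster in pollsters:
            resources = discover("local_instances")
            pollster.get_samples(resources)
            LOG.debug("Finished processing pollster [%s].", pollster.name)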
Feb 24 15:53:42 compute-0 nova_compute[188703]: 2026-02-24 15:53:42.970 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:43 compute-0 nova_compute[188703]: 2026-02-24 15:53:43.053 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:47 compute-0 nova_compute[188703]: 2026-02-24 15:53:47.974 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:48 compute-0 nova_compute[188703]: 2026-02-24 15:53:48.057 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
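The recurring "[POLLIN] on fd 26 __log_wakeup" lines are the python-ovs poller waking up when the OVSDB connection becomes readable. A minimal wait-loop sketch using the ovs.poller API; the socket and update handler are placeholders:

    # Minimal python-ovs event wait; 'sock' and 'handle_update' are
    # placeholders for the ovsdbapp IDL connection and its callback.
    import select

    import ovs.poller

    def wait_for_updates(sock, handle_update):
        while True:
            poller = ovs.poller.Poller()  # single-use, rebuilt per loop
            poller.fd_wait(sock.fileno(), select.POLLIN)
            poller.block()  # returns once the fd (26 in the log) is readable
            handle_update(sock)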
Feb 24 15:53:50 compute-0 podman[245490]: 2026-02-24 15:53:50.138213517 +0000 UTC m=+0.089192988 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:53:50 compute-0 podman[245491]: 2026-02-24 15:53:50.140820579 +0000 UTC m=+0.086416972 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
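The podman[...] events above are emitted each time a container's configured healthcheck (for example '/openstack/healthcheck node_exporter') runs and passes. The same check can be triggered by hand; a small sketch shelling out to podman's healthcheck subcommand, with the container name taken from the log:

    # Run a container's configured healthcheck on demand;
    # 'podman healthcheck run' exits 0 when the check passes.
    import subprocess

    def container_healthy(name: str) -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return result.returncode == 0

    print(container_healthy("node_exporter"))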
Feb 24 15:53:50 compute-0 nova_compute[188703]: 2026-02-24 15:53:50.862 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:50 compute-0 nova_compute[188703]: 2026-02-24 15:53:50.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:52 compute-0 nova_compute[188703]: 2026-02-24 15:53:52.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:52 compute-0 nova_compute[188703]: 2026-02-24 15:53:52.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
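The nova_compute lines show oslo.service firing ComputeManager periodic tasks on their timers. A self-contained sketch of the same decorator pattern; the manager class and task body are stand-ins, not nova's:

    # Minimal oslo.service periodic-task sketch mirroring the
    # "Running periodic task ComputeManager._poll_*" lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # nova queries the hypervisor for volume I/O here

    manager = DemoManager(cfg.CONF)
    manager.run_periodic_tasks(context=None)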
Feb 24 15:53:52 compute-0 nova_compute[188703]: 2026-02-24 15:53:52.978 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:53 compute-0 nova_compute[188703]: 2026-02-24 15:53:53.059 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:53 compute-0 podman[245532]: 2026-02-24 15:53:53.071671284 +0000 UTC m=+0.079231973 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 15:53:53 compute-0 podman[245531]: 2026-02-24 15:53:53.094492861 +0000 UTC m=+0.102946275 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 15:53:53 compute-0 nova_compute[188703]: 2026-02-24 15:53:53.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:53 compute-0 nova_compute[188703]: 2026-02-24 15:53:53.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:53:54 compute-0 nova_compute[188703]: 2026-02-24 15:53:54.730 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:53:54 compute-0 nova_compute[188703]: 2026-02-24 15:53:54.731 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:53:54 compute-0 nova_compute[188703]: 2026-02-24 15:53:54.732 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:53:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:53:55.712 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:53:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:53:55.713 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:53:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:53:55.715 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:53:56 compute-0 nova_compute[188703]: 2026-02-24 15:53:56.793 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:53:56 compute-0 nova_compute[188703]: 2026-02-24 15:53:56.816 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:53:56 compute-0 nova_compute[188703]: 2026-02-24 15:53:56.817 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:53:56 compute-0 nova_compute[188703]: 2026-02-24 15:53:56.818 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:56 compute-0 nova_compute[188703]: 2026-02-24 15:53:56.818 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:53:57 compute-0 nova_compute[188703]: 2026-02-24 15:53:57.982 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:58 compute-0 nova_compute[188703]: 2026-02-24 15:53:58.062 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:53:58 compute-0 podman[245572]: 2026-02-24 15:53:58.176792005 +0000 UTC m=+0.122546582 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 15:53:58 compute-0 nova_compute[188703]: 2026-02-24 15:53:58.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:58 compute-0 nova_compute[188703]: 2026-02-24 15:53:58.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:59 compute-0 podman[204685]: time="2026-02-24T15:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:53:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:53:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 24 15:53:59 compute-0 nova_compute[188703]: 2026-02-24 15:53:59.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:53:59 compute-0 nova_compute[188703]: 2026-02-24 15:53:59.966 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:53:59 compute-0 nova_compute[188703]: 2026-02-24 15:53:59.966 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:53:59 compute-0 nova_compute[188703]: 2026-02-24 15:53:59.967 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:53:59 compute-0 nova_compute[188703]: 2026-02-24 15:53:59.967 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.053 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.129 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.131 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.210 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.212 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.282 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.283 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.346 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.354 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.442 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.443 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.490 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.491 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.546 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.547 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.597 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.603 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.651 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.652 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.718 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.719 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.769 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.771 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.839 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.846 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.900 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.901 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.949 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:00 compute-0 nova_compute[188703]: 2026-02-24 15:54:00.951 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.016 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.017 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.105 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:54:01 compute-0 openstack_network_exporter[207830]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:54:01 compute-0 openstack_network_exporter[207830]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.536 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.537 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=72.17282104492188GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.537 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.538 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.630 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.631 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.631 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.632 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.632 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.633 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.728 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.744 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.745 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:54:01 compute-0 nova_compute[188703]: 2026-02-24 15:54:01.746 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:54:02 compute-0 nova_compute[188703]: 2026-02-24 15:54:02.985 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:03 compute-0 nova_compute[188703]: 2026-02-24 15:54:03.066 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:04 compute-0 podman[245639]: 2026-02-24 15:54:04.164493013 +0000 UTC m=+0.114576554 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 15:54:04 compute-0 podman[245640]: 2026-02-24 15:54:04.221149376 +0000 UTC m=+0.172465311 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 15:54:07 compute-0 nova_compute[188703]: 2026-02-24 15:54:07.988 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:08 compute-0 nova_compute[188703]: 2026-02-24 15:54:08.069 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:11 compute-0 podman[245683]: 2026-02-24 15:54:11.162570303 +0000 UTC m=+0.118085719 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:54:12 compute-0 nova_compute[188703]: 2026-02-24 15:54:12.990 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:13 compute-0 nova_compute[188703]: 2026-02-24 15:54:13.072 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:17 compute-0 nova_compute[188703]: 2026-02-24 15:54:17.993 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:18 compute-0 nova_compute[188703]: 2026-02-24 15:54:18.073 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:21 compute-0 podman[245709]: 2026-02-24 15:54:21.157097764 +0000 UTC m=+0.109774842 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:54:21 compute-0 podman[245710]: 2026-02-24 15:54:21.187456367 +0000 UTC m=+0.130251174 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 15:54:22 compute-0 nova_compute[188703]: 2026-02-24 15:54:22.995 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:23 compute-0 nova_compute[188703]: 2026-02-24 15:54:23.076 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:24 compute-0 podman[245752]: 2026-02-24 15:54:24.126651631 +0000 UTC m=+0.075023259 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 15:54:24 compute-0 podman[245751]: 2026-02-24 15:54:24.152120059 +0000 UTC m=+0.104947369 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 24 15:54:28 compute-0 nova_compute[188703]: 2026-02-24 15:54:27.999 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:28 compute-0 nova_compute[188703]: 2026-02-24 15:54:28.080 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:29 compute-0 podman[245800]: 2026-02-24 15:54:29.137530647 +0000 UTC m=+0.086369400 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1770267347, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z)
Feb 24 15:54:29 compute-0 podman[204685]: time="2026-02-24T15:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:54:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:54:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 24 15:54:31 compute-0 openstack_network_exporter[207830]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:54:31 compute-0 openstack_network_exporter[207830]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:54:31 compute-0 openstack_network_exporter[207830]: 
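
Note: this ERROR pair recurs throughout the log. The exporter asks ovs-vswitchd for PMD statistics (dpif-netdev/pmd-perf-show, dpif-netdev/pmd-rxq-show), but those appctl commands apply only to the userspace (netdev/DPDK) datapath; on a host running the kernel datapath, vswitchd answers "please specify an existing datapath". A minimal sketch reproducing the probe, assuming ovs-appctl is on PATH and the vswitchd control socket is reachable:

    # Minimal sketch: reproduce the failing probe from the exporter's appctl.go.
    # "dpif-netdev/*" commands only succeed when a userspace (netdev) datapath
    # exists; on a kernel-datapath host they fail exactly as logged above.
    import subprocess

    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        result = subprocess.run(
            ["ovs-appctl", cmd],
            capture_output=True, text=True,
        )
        print(cmd, "->", result.returncode, (result.stderr or result.stdout).strip())

Running `ovs-appctl dpif/show` lists the datapaths that do exist, which is the quickest way to confirm whether a netdev datapath is configured on this host.
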
Feb 24 15:54:33 compute-0 nova_compute[188703]: 2026-02-24 15:54:33.001 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:33 compute-0 nova_compute[188703]: 2026-02-24 15:54:33.084 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:35 compute-0 podman[245822]: 2026-02-24 15:54:35.16335896 +0000 UTC m=+0.118134552 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 15:54:35 compute-0 podman[245823]: 2026-02-24 15:54:35.20349085 +0000 UTC m=+0.151590059 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:54:38 compute-0 nova_compute[188703]: 2026-02-24 15:54:38.005 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:38 compute-0 nova_compute[188703]: 2026-02-24 15:54:38.092 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:42 compute-0 podman[245865]: 2026-02-24 15:54:42.135154248 +0000 UTC m=+0.090258927 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:54:43 compute-0 nova_compute[188703]: 2026-02-24 15:54:43.010 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:43 compute-0 nova_compute[188703]: 2026-02-24 15:54:43.096 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:48 compute-0 nova_compute[188703]: 2026-02-24 15:54:48.014 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:48 compute-0 nova_compute[188703]: 2026-02-24 15:54:48.100 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:51 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 15:54:51 compute-0 podman[245891]: 2026-02-24 15:54:51.610523755 +0000 UTC m=+0.119211850 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:54:51 compute-0 podman[245890]: 2026-02-24 15:54:51.621309011 +0000 UTC m=+0.126363456 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:54:52 compute-0 nova_compute[188703]: 2026-02-24 15:54:52.746 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:52 compute-0 nova_compute[188703]: 2026-02-24 15:54:52.747 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:53 compute-0 nova_compute[188703]: 2026-02-24 15:54:53.017 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:53 compute-0 nova_compute[188703]: 2026-02-24 15:54:53.103 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:53 compute-0 nova_compute[188703]: 2026-02-24 15:54:53.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:53 compute-0 nova_compute[188703]: 2026-02-24 15:54:53.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:54:53 compute-0 nova_compute[188703]: 2026-02-24 15:54:53.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:54:54 compute-0 nova_compute[188703]: 2026-02-24 15:54:54.795 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:54:54 compute-0 nova_compute[188703]: 2026-02-24 15:54:54.796 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:54:54 compute-0 nova_compute[188703]: 2026-02-24 15:54:54.797 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:54:54 compute-0 nova_compute[188703]: 2026-02-24 15:54:54.798 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:54:55 compute-0 podman[245931]: 2026-02-24 15:54:55.146981042 +0000 UTC m=+0.096385875 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_id=kepler, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Feb 24 15:54:55 compute-0 podman[245932]: 2026-02-24 15:54:55.171145605 +0000 UTC m=+0.119221411 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 15:54:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:54:55.713 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:54:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:54:55.715 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:54:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:54:55.716 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
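
Note: the acquire/wait/release trio above is oslo.concurrency's standard lock tracing: every guarded section logs acquisition, time spent waiting, and time the lock was held. A minimal sketch of the pattern that produces lines like these, assuming oslo.concurrency is installed:

    # Minimal sketch: the oslo.concurrency pattern behind the three lock
    # lines above; entering the decorated function logs "acquired ... waited",
    # returning from it logs "released ... held".
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # body runs with the named in-process lock held
        pass
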
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.972 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.992 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.993 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
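
Note: the network_info blob two lines up is Nova's cached view of the instance's single OVN port: fixed IP 192.168.0.39 on the 192.168.0.0/24 tenant subnet (MTU 1442, tunneled), with floating IP 192.168.122.189 attached. A minimal sketch pulling the addresses out of such a cache entry, assuming it has already been parsed into Python data:

    # Minimal sketch: walk a Nova network_info cache entry (as logged above)
    # and list each fixed IP with any attached floating IPs.
    def addresses(network_info: list) -> list[tuple[str, list[str]]]:
        out = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    out.append((ip["address"], floats))
        return out

    # e.g. [('192.168.0.39', ['192.168.122.189'])] for the entry above
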
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.994 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.995 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.995 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:56 compute-0 nova_compute[188703]: 2026-02-24 15:54:56.996 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:54:58 compute-0 nova_compute[188703]: 2026-02-24 15:54:58.020 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:58 compute-0 nova_compute[188703]: 2026-02-24 15:54:58.105 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:54:59 compute-0 podman[204685]: time="2026-02-24T15:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:54:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:54:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 24 15:54:59 compute-0 nova_compute[188703]: 2026-02-24 15:54:59.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:54:59 compute-0 nova_compute[188703]: 2026-02-24 15:54:59.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:00 compute-0 podman[245968]: 2026-02-24 15:55:00.127666866 +0000 UTC m=+0.082059468 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1770267347)
Feb 24 15:55:00 compute-0 nova_compute[188703]: 2026-02-24 15:55:00.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:00 compute-0 nova_compute[188703]: 2026-02-24 15:55:00.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:55:00 compute-0 nova_compute[188703]: 2026-02-24 15:55:00.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:55:00 compute-0 nova_compute[188703]: 2026-02-24 15:55:00.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:55:00 compute-0 nova_compute[188703]: 2026-02-24 15:55:00.976 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.137 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.212 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.213 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.275 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.277 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.328 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.330 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 openstack_network_exporter[207830]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:55:01 compute-0 openstack_network_exporter[207830]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.428 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.439 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.526 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.528 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.577 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.579 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.659 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.661 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.742 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.749 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.809 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.811 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.901 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.903 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.967 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:01 compute-0 nova_compute[188703]: 2026-02-24 15:55:01.969 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.039 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.050 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.131 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.132 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.207 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.208 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.284 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.286 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.339 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
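
Note: each Running cmd / CMD returned pair above is the resource audit sizing one instance disk. Nova shells out to qemu-img info, wrapped in oslo_concurrency.prlimit to cap address space at 1 GiB and CPU time at 30 s, and passes --force-share so the probe does not steal the image lock from the running guest. A minimal sketch of the underlying probe without the prlimit wrapper, assuming qemu-img is installed and using a disk path taken from the log:

    # Minimal sketch: the disk probe behind the audit lines above, minus the
    # oslo prlimit wrapper. --force-share avoids taking the qcow2 lock away
    # from the running QEMU; --output=json makes the result parseable.
    import json
    import subprocess

    def disk_info(path: str) -> dict:
        result = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    info = disk_info("/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk")
    print(info["virtual-size"], info.get("actual-size"))
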
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.828 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.830 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4566MB free_disk=72.17282104492188GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.831 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.832 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.923 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.924 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.924 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.924 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.925 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:55:02 compute-0 nova_compute[188703]: 2026-02-24 15:55:02.925 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.061 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.079 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.081 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.081 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
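
Note: the inventory record a few lines up is what Placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 VCPU (8 x 4.0), 7167 MB of RAM, and 70.2 GB of disk despite the physical 8 / 7679 MB / 79 GB figures in the resource view. A minimal sketch of that arithmetic over the logged inventory dict:

    # Minimal sketch: Placement's usable-capacity arithmetic applied to the
    # inventory logged above: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
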
Feb 24 15:55:03 compute-0 nova_compute[188703]: 2026-02-24 15:55:03.107 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:04 compute-0 nova_compute[188703]: 2026-02-24 15:55:04.078 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:06 compute-0 podman[246037]: 2026-02-24 15:55:06.169054948 +0000 UTC m=+0.117921146 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 24 15:55:06 compute-0 podman[246038]: 2026-02-24 15:55:06.230851173 +0000 UTC m=+0.166917513 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 24 15:55:08 compute-0 nova_compute[188703]: 2026-02-24 15:55:08.026 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:08 compute-0 nova_compute[188703]: 2026-02-24 15:55:08.109 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:13 compute-0 nova_compute[188703]: 2026-02-24 15:55:13.028 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:13 compute-0 nova_compute[188703]: 2026-02-24 15:55:13.112 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:13 compute-0 podman[246080]: 2026-02-24 15:55:13.162337757 +0000 UTC m=+0.116763765 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:55:18 compute-0 nova_compute[188703]: 2026-02-24 15:55:18.032 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:18 compute-0 nova_compute[188703]: 2026-02-24 15:55:18.114 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:22 compute-0 podman[246106]: 2026-02-24 15:55:22.13087934 +0000 UTC m=+0.084946167 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 15:55:22 compute-0 podman[246107]: 2026-02-24 15:55:22.155463691 +0000 UTC m=+0.101325484 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 24 15:55:23 compute-0 nova_compute[188703]: 2026-02-24 15:55:23.035 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:23 compute-0 nova_compute[188703]: 2026-02-24 15:55:23.117 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:26 compute-0 podman[246149]: 2026-02-24 15:55:26.16184839 +0000 UTC m=+0.109626280 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 15:55:26 compute-0 podman[246148]: 2026-02-24 15:55:26.164532553 +0000 UTC m=+0.115535991 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Feb 24 15:55:28 compute-0 nova_compute[188703]: 2026-02-24 15:55:28.038 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:28 compute-0 nova_compute[188703]: 2026-02-24 15:55:28.121 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:29 compute-0 podman[204685]: time="2026-02-24T15:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:55:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:55:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 24 15:55:31 compute-0 podman[246185]: 2026-02-24 15:55:31.130892562 +0000 UTC m=+0.093509002 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 15:55:31 compute-0 openstack_network_exporter[207830]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:55:31 compute-0 openstack_network_exporter[207830]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:55:33 compute-0 nova_compute[188703]: 2026-02-24 15:55:33.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:33 compute-0 nova_compute[188703]: 2026-02-24 15:55:33.124 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:37 compute-0 podman[246205]: 2026-02-24 15:55:37.16204074 +0000 UTC m=+0.117895276 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0)
Feb 24 15:55:37 compute-0 podman[246206]: 2026-02-24 15:55:37.197978659 +0000 UTC m=+0.149997891 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:55:38 compute-0 nova_compute[188703]: 2026-02-24 15:55:38.042 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:38 compute-0 nova_compute[188703]: 2026-02-24 15:55:38.126 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.830 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.832 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.832 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8f290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.845 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'name': 'vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.851 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.858 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'name': 'vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.864 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'name': 'vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.864 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:55:39.865704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.904 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.944 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:39.984 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.025 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:55:40.027486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.055 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.056 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.057 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.086 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.087 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.087 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.121 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.122 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.122 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.145 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.146 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.146 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.147 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:55:40.147638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.151 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.154 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.158 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.161 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.161 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.162 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.162 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:55:40.162244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
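[Every meter in this trace follows the same five-step cycle: a discovery pass over local instances, a coordination check (no hashring is configured, so none is needed), a heartbeat update, one "_stats_to_sample ... volume" line per instance, and a closing "Finished polling" message. A minimal Python sketch of that loop follows; all names here (run_pollster, discover_local_instances, get_samples) are illustrative assumptions, not ceilometer's actual internals. Note also that the heartbeat "Updated heartbeat for ..." lines carry a different PID column (12 vs the worker's 14), which is why they occasionally land out of timestamp order with the worker's lines.]

    # Minimal sketch of the polling cycle these DEBUG/INFO lines trace.
    # Names and structure are assumptions for illustration only.
    import logging

    LOG = logging.getLogger("polling.sketch")

    def heartbeat(meter):
        LOG.debug("Pollster heartbeat update: %s", meter)

    def run_pollster(pollster, discover_local_instances):
        LOG.debug("Executing discovery process for pollster %s", pollster.name)
        instances = discover_local_instances()
        LOG.info("Polling pollster %s", pollster.name)
        if pollster.coordination_group is None:
            # Matches "not configured in a source ... that requires coordination"
            LOG.debug("No coordination needed for %s", pollster.name)
        heartbeat(pollster.name)
        samples = []
        for instance in instances:
            for sample in pollster.get_samples(instance):
                # One line per instance (and per device, for disk.* meters)
                LOG.debug("%s/%s volume: %s",
                          instance.id, pollster.name, sample.volume)
                samples.append(sample)
        LOG.info("Finished polling pollster %s", pollster.name)
        return samples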
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.164 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:55:40.164272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:55:40.165796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.165 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.166 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.166 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
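[power.state reports volume 1 for all four instances; 1 is the running state in libvirt's domain-state enumeration (libvirt.VIR_DOMAIN_RUNNING). A quick way to read the same state directly, assuming libvirt-python is installed and the local qemu:///system URI is reachable:]

    # Hedged sketch: reading the libvirt domain state that power.state reflects.
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        state, _reason = dom.state()  # 1 == libvirt.VIR_DOMAIN_RUNNING
        print(dom.UUIDString(), state)
    conn.close()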
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.167 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.168 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.168 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:55:40.167694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.168 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.169 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.169 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.169 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.169 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.170 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.170 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.170 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
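[disk.device.capacity emits three samples per instance — two 1 GiB devices (1073741824 bytes) and one small third device (583680 or 485376 bytes) — so the repeated lines above are per-device samples, not duplicates. Per-device capacity can be read from libvirt roughly as below; the qemu:///system URI and the XML parsing approach are assumptions for illustration, not ceilometer's code:]

    # Sketch: one capacity value per <disk> target in the domain XML.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        xml = ElementTree.fromstring(dom.XMLDesc())
        for target in xml.findall("devices/disk/target"):
            dev = target.get("dev")                      # e.g. vda, vdb
            capacity, allocation, physical = dom.blockInfo(dev)
            print(f"{dom.UUIDString()}/{dev} capacity: {capacity}")
    conn.close()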
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:55:40.171623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.251 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.252 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.252 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.341 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.341 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.342 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.435 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.436 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.436 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.506 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.507 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.507 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:55:40.510205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.511 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.511 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.512 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.512 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.513 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:55:40.514327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.515 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 2224753847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.515 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 114510394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.516 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 94768043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.516 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.516 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.517 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.517 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 811206452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.518 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 179818558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.518 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 156094626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.519 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 791899769 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.519 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 157289336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.520 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.latency volume: 260856202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.522 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:55:40.522724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.523 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/cpu volume: 35030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.524 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 39870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.524 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/cpu volume: 34680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.525 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/cpu volume: 305800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
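[The cpu meter is cumulative guest CPU time in nanoseconds (35030000000 ≈ 35 s for the first instance, ~306 s for the busier fourth one), so a utilisation percentage comes from the delta between two polls. A worked example; the previous sample value, the 600 s polling interval, and vcpus=1 are assumptions for illustration:]

    # Worked example: utilisation from two cumulative cpu samples.
    prev_ns = 34_430_000_000          # assumed previous poll
    curr_ns = 35_030_000_000          # this poll (logged above)
    interval_s, vcpus = 600, 1        # assumed interval and vCPU count

    cpu_util = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"cpu_util: {cpu_util:.2f}%")   # -> 0.10%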
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:55:40.527057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.528 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.528 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.528 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.529 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.529 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.530 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.530 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.531 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.531 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.531 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.532 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.532 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:55:40.534964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.536 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.536 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.537 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.537 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:55:40.539787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.540 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.541 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.541 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.542 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets volume: 62 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.544 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:55:40.543897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.544 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.544 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.545 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.545 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.545 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.545 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.546 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.546 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.546 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.546 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.547 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.547 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.548 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:55:40.548172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.548 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 2487471190 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.548 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 10083548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.549 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.549 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.549 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.549 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.550 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 2328620032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.550 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 15976249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.550 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.550 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 2883365654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.551 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 18006907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.551 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.552 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.553 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:55:40.553109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.553 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.553 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.554 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.554 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.554 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.554 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.555 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.555 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.555 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.556 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.556 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.556 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
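[Editor's note] Each "Pollster heartbeat update" line above is logged by the polling worker (process 14 in the oslo log prefix), while the matching "Updated heartbeat for ... (<timestamp>)" line comes from a separate status process (12), so the two records can land slightly out of order in the journal. A rough sketch of that producer/consumer split, assuming a simple queue (ceilometer's real mechanism may differ):

    import datetime, queue, threading

    updates = queue.Queue()
    heartbeats = {}                                    # meter name -> last heartbeat time

    def heartbeat(name):                               # worker side: "Pollster heartbeat update: <meter>"
        updates.put((name, datetime.datetime.utcnow()))

    def _update_status():                              # status side: "Updated heartbeat for <meter> (<ts>)"
        while True:
            name, ts = updates.get()
            heartbeats[name] = ts
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    threading.Thread(target=_update_status, daemon=True).start()
    heartbeat("disk.device.write.bytes")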
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:55:40.558063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.559 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.559 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.559 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.560 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.560 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.560 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.560 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.561 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.561 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.561 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.561 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.562 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:55:40.563431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.563 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.564 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.564 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.564 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:55:40.565757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.566 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes.delta volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.567 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.567 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.567 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:55:40.569537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:55:40.571833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.573 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.574 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.574 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.575 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.575 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:55:40.574282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.577 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.578 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:55:40.577585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.578 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.579 14 DEBUG ceilometer.compute.pollsters [-] 4e6fb5f9-248e-440a-9cd9-472a05ab19ee/network.outgoing.bytes volume: 7304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.580 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.581 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:55:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:55:40.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
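[Editor's note] The burst of "Finished processing pollster [...]" lines closes out the whole polling task. One way to extract per-meter timings from a journal slice like this is to pair each "Polling pollster X" line with its "Finished polling pollster X" line; a small sketch (the regex assumes the exact oslo log prefix used above):

    import re, sys
    from datetime import datetime

    # Matches e.g. "2026-02-24 15:55:40.552 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes ..."
    pat = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO \S+ \[-\] "
                     r"(Polling|Finished polling) pollster (\S+)")
    start = {}
    for line in sys.stdin:
        m = pat.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        if m.group(2) == "Polling":
            start[m.group(3)] = ts
        elif m.group(3) in start:
            print(m.group(3), (ts - start.pop(m.group(3))).total_seconds(), "s")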
Feb 24 15:55:43 compute-0 nova_compute[188703]: 2026-02-24 15:55:43.045 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:43 compute-0 nova_compute[188703]: 2026-02-24 15:55:43.129 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:44 compute-0 podman[246254]: 2026-02-24 15:55:44.167607962 +0000 UTC m=+0.125287267 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
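[Editor's note] The health_status=healthy fields above come from podman's periodic healthcheck runs (the 'healthcheck' entry in config_data names the test script). The same status can be read back per container; a hedged example, noting the Go-template path is '.State.Health.Status' on podman 4.x but '.State.Healthcheck.Status' on some older releases:

    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", "podman_exporter"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected: "healthy", matching the health_status above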
Feb 24 15:55:48 compute-0 nova_compute[188703]: 2026-02-24 15:55:48.047 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:48 compute-0 nova_compute[188703]: 2026-02-24 15:55:48.131 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:50 compute-0 nova_compute[188703]: 2026-02-24 15:55:50.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:51 compute-0 nova_compute[188703]: 2026-02-24 15:55:51.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:53 compute-0 nova_compute[188703]: 2026-02-24 15:55:53.050 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:53 compute-0 nova_compute[188703]: 2026-02-24 15:55:53.134 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:53 compute-0 podman[246280]: 2026-02-24 15:55:53.154276539 +0000 UTC m=+0.104855460 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 24 15:55:53 compute-0 podman[246279]: 2026-02-24 15:55:53.170502491 +0000 UTC m=+0.122098030 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:55:54 compute-0 nova_compute[188703]: 2026-02-24 15:55:54.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:55:55.715 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:55:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:55:55.717 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:55:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:55:55.718 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
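[Editor's note] The three lockutils lines above are the standard oslo.concurrency pattern: acquire, run the guarded section, release, with the waited/held durations logged. In application code this is typically just a decorated method; a minimal sketch using the real oslo_concurrency API (the class here is a stand-in, not neutron's):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # guarded section; oslo logs the acquire/release pairs seen above
            pass

    ProcessMonitor()._check_child_processes()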
Feb 24 15:55:55 compute-0 nova_compute[188703]: 2026-02-24 15:55:55.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:55 compute-0 nova_compute[188703]: 2026-02-24 15:55:55.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:55:56 compute-0 nova_compute[188703]: 2026-02-24 15:55:56.870 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:55:56 compute-0 nova_compute[188703]: 2026-02-24 15:55:56.870 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:55:56 compute-0 nova_compute[188703]: 2026-02-24 15:55:56.871 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:55:57 compute-0 podman[246319]: 2026-02-24 15:55:57.131276496 +0000 UTC m=+0.080002882 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0)
Feb 24 15:55:57 compute-0 podman[246318]: 2026-02-24 15:55:57.131551344 +0000 UTC m=+0.082773368 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 24 15:55:58 compute-0 nova_compute[188703]: 2026-02-24 15:55:58.052 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:58 compute-0 nova_compute[188703]: 2026-02-24 15:55:58.138 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:55:59 compute-0 podman[204685]: time="2026-02-24T15:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:55:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:55:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
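[Editor's note] Those two GET lines are the prometheus-podman-exporter scraping podman's libpod REST API over the unix socket named in the podman_exporter config earlier (CONTAINER_HOST=unix:///run/podman/podman.sock). The same query can be reproduced with nothing but the standard library; a sketch assuming that socket path and API version:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])   # 200 plus the JSON the exporter parses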
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.921 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [{"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
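[Editor's note] The cache payload in the line above is plain JSON, so the addressing details (fixed IP 192.168.0.224 with floating IP 192.168.122.178 on port 93527468-4177-4f9e-a801-345f54dbe456) can be pulled out directly. A sketch over a trimmed copy of the logged structure:

    import json

    network_info_json = '''[{"id": "93527468-4177-4f9e-a801-345f54dbe456",
      "address": "fa:16:3e:3a:32:ce",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.224",
        "floating_ips": [{"address": "192.168.122.178"}]}]}]}}]'''

    for vif in json.loads(network_info_json):
        print("port", vif["id"], "mac", vif["address"])
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("  fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("  floating:", fip["address"])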
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.942 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:55:59 compute-0 nova_compute[188703]: 2026-02-24 15:55:59.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
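[Editor's note] That skip is the expected behavior with the default configuration: _reclaim_queued_deletes only reclaims SOFT_DELETED instances when reclaim_instance_interval is positive. To enable deferred delete an operator would set it in nova.conf, for example:

    [DEFAULT]
    # keep soft-deleted instances for an hour before they are reclaimed
    reclaim_instance_interval = 3600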
Feb 24 15:56:00 compute-0 nova_compute[188703]: 2026-02-24 15:56:00.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:00 compute-0 nova_compute[188703]: 2026-02-24 15:56:00.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:01 compute-0 openstack_network_exporter[207830]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:56:01 compute-0 openstack_network_exporter[207830]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
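[Editor's note] Both ERRORs come from ovs-appctl's dpif-netdev/ commands, which only apply to the userspace (netdev/DPDK) datapath; on a host running the kernel datapath there are no PMD threads to report, so the exporter's pmd-rxq/pmd-perf collectors fail with "please specify an existing datapath". A hedged check of which datapaths actually exist, assuming ovs-appctl is on PATH and the OVS unixctl socket is reachable:

    import subprocess

    dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True, check=True)
    print(dps.stdout)   # e.g. "system@ovs-system" (kernel datapath, hence no PMDs to show)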
Feb 24 15:56:01 compute-0 nova_compute[188703]: 2026-02-24 15:56:01.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:01 compute-0 nova_compute[188703]: 2026-02-24 15:56:01.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:01 compute-0 nova_compute[188703]: 2026-02-24 15:56:01.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:01 compute-0 nova_compute[188703]: 2026-02-24 15:56:01.978 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
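The acquire/held/released triplets around "compute_resources" come from oslo.concurrency's synchronized decorator, which serializes every resource-tracker method on one named semaphore; clean_compute_node_cache holds it for under a millisecond here, while _update_available_resource further below holds it for 0.828 s. A minimal sketch of the pattern (not nova's exact decorator stack):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # All callables synchronized on the same name serialize with each
        # other; waited/held durations are logged at DEBUG, as seen above.
        ...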
Feb 24 15:56:01 compute-0 nova_compute[188703]: 2026-02-24 15:56:01.978 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.090 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 podman[246357]: 2026-02-24 15:56:02.105678683 +0000 UTC m=+0.062685761 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.140 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.142 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.189 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.191 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.242 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.243 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.297 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
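The qemu-img runs above are the resource audit sizing each guest's root disk and ephemeral disk (each file happens to be probed twice per pass). oslo.concurrency re-executes qemu-img under its prlimit helper so a corrupt or hostile image cannot wedge the agent: --as=1073741824 caps the address space at 1 GiB, --cpu=30 caps CPU seconds, and --force-share avoids the write lock held by the running guest. A minimal sketch of the same call (the instance path is a placeholder):

    from oslo_concurrency import processutils

    # Mirrors the command line logged above: 1 GiB address-space cap,
    # 30 s CPU-time cap, JSON output, no exclusive image lock.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1 * 1024 ** 3,
        cpu_time=30)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',
        '--force-share', '--output=json',
        prlimit=QEMU_IMG_LIMITS)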
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.305 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.394 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.395 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.464 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.466 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.548 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.549 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.624 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.638 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.697 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.699 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.786 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.788 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.847 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.848 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.905 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.911 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.966 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:02 compute-0 nova_compute[188703]: 2026-02-24 15:56:02.967 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.018 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.019 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.053 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.067 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.068 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.119 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.139 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:03 compute-0 sshd-session[246424]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.498 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
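This warning is benign on a virtualized host like this one: libvirt reports more than one CPU socket within a single (emulated) NUMA cell, and the `socket` PCI NUMA affinity policy (requested via the hw:pci_numa_affinity_policy=socket flavor extra spec or the matching image property) assumes at most one socket per NUMA node, so only that policy is disabled; the other affinity policies are unaffected.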
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.500 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=72.17287826538086GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.500 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:03 compute-0 nova_compute[188703]: 2026-02-24 15:56:03.501 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.035 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.036 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.036 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.036 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.037 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.037 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
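The final view is internally consistent with four 1-vCPU / 512 MB / 2 GB guests (the allocations listed above) plus the 512 MB host memory reservation reported in the inventory below; a quick check:

    # Consistency check of the resource view: four guests at
    # 1 vCPU / 512 MB RAM / 2 GB disk, plus 512 MB reserved host RAM.
    guests = 4
    assert 512 + guests * 512 == 2560   # used_ram (MB)
    assert guests * 2 == 8              # used_disk (GB)
    assert 8 - guests * 1 == 4          # free_vcpus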
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.057 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.125 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.126 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.152 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.179 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.298 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.326 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.328 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:56:04 compute-0 nova_compute[188703]: 2026-02-24 15:56:04.328 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:08 compute-0 nova_compute[188703]: 2026-02-24 15:56:08.057 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:08 compute-0 podman[246428]: 2026-02-24 15:56:08.14914431 +0000 UTC m=+0.096341898 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 24 15:56:08 compute-0 nova_compute[188703]: 2026-02-24 15:56:08.146 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:08 compute-0 podman[246429]: 2026-02-24 15:56:08.193565712 +0000 UTC m=+0.138424636 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 24 15:56:13 compute-0 nova_compute[188703]: 2026-02-24 15:56:13.060 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:13 compute-0 nova_compute[188703]: 2026-02-24 15:56:13.154 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:14 compute-0 podman[246475]: 2026-02-24 15:56:14.779539249 +0000 UTC m=+0.079317344 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 15:56:18 compute-0 nova_compute[188703]: 2026-02-24 15:56:18.063 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:18 compute-0 nova_compute[188703]: 2026-02-24 15:56:18.156 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:23 compute-0 nova_compute[188703]: 2026-02-24 15:56:23.065 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:23 compute-0 nova_compute[188703]: 2026-02-24 15:56:23.160 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:24 compute-0 podman[246499]: 2026-02-24 15:56:24.139352779 +0000 UTC m=+0.089735499 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:56:24 compute-0 podman[246500]: 2026-02-24 15:56:24.187310177 +0000 UTC m=+0.127725624 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:56:28 compute-0 nova_compute[188703]: 2026-02-24 15:56:28.068 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:28 compute-0 podman[246538]: 2026-02-24 15:56:28.146272532 +0000 UTC m=+0.098064925 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, vcs-type=git, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 24 15:56:28 compute-0 podman[246539]: 2026-02-24 15:56:28.157413736 +0000 UTC m=+0.102024943 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 24 15:56:28 compute-0 nova_compute[188703]: 2026-02-24 15:56:28.165 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:29 compute-0 podman[204685]: time="2026-02-24T15:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:56:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:56:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
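These two libpod requests are the podman_exporter scraping container state through the API socket served by podman[204685]; the same endpoints can be queried by hand with curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true' (the host part of the URL is a dummy).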
Feb 24 15:56:31 compute-0 openstack_network_exporter[207830]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:56:31 compute-0 openstack_network_exporter[207830]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.965 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.967 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.968 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.969 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.970 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.972 188707 INFO nova.compute.manager [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Terminating instance
Feb 24 15:56:32 compute-0 nova_compute[188703]: 2026-02-24 15:56:32.974 188707 DEBUG nova.compute.manager [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 15:56:33 compute-0 kernel: tap93527468-41 (unregistering): left promiscuous mode
Feb 24 15:56:33 compute-0 NetworkManager[56995]: <info>  [1771948593.0307] device (tap93527468-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 15:56:33 compute-0 ovn_controller[98701]: 2026-02-24T15:56:33Z|00050|binding|INFO|Releasing lport 93527468-4177-4f9e-a801-345f54dbe456 from this chassis (sb_readonly=0)
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.043 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_controller[98701]: 2026-02-24T15:56:33Z|00051|binding|INFO|Setting lport 93527468-4177-4f9e-a801-345f54dbe456 down in Southbound
Feb 24 15:56:33 compute-0 ovn_controller[98701]: 2026-02-24T15:56:33Z|00052|binding|INFO|Removing iface tap93527468-41 ovn-installed in OVS
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.050 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.056 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:32:ce 192.168.0.224'], port_security=['fa:16:3e:3a:32:ce 192.168.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-22n3finaao3u-a36yxyv7uiwf-port-trhq4wmeb2xv', 'neutron:cidrs': '192.168.0.224/24', 'neutron:device_id': '4e6fb5f9-248e-440a-9cd9-472a05ab19ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-22n3finaao3u-a36yxyv7uiwf-port-trhq4wmeb2xv', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=93527468-4177-4f9e-a801-345f54dbe456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.057 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.057 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 93527468-4177-4f9e-a801-345f54dbe456 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a unbound from our chassis
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.060 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.071 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 24 15:56:33 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 15.775s CPU time.
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.080 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[62fe16fc-99f1-4632-87b7-ef5a64d3c1c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:56:33 compute-0 systemd-machined[158049]: Machine qemu-2-instance-00000002 terminated.
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.109 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[082fb420-3735-4bb6-a0a5-135ac1cf7722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.112 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a37921-e6de-40be-93b0-50f33f7320e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.144 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b29a4d-ddda-4a46-8125-258ba61ad085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:56:33 compute-0 podman[246579]: 2026-02-24 15:56:33.164254466 +0000 UTC m=+0.113010803 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.165 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0af861-465c-4963-a6e9-d68c3a469ca7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 33062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246610, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.167 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.179 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[acf4de58-b623-4b31-9b31-84d51f30a875]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246611, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246611, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
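The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the metadata namespace: the proxy interface tap863f062e-11 carries the well-known metadata address 169.254.169.254/32 plus an address on the tenant subnet, 192.168.0.2/24. A sketch reproducing the address dump with pyroute2, assuming root on the compute node; namespace and device names are the ones in the reply ('target': 'ovnmeta-863f062e-...'):

    from pyroute2 import NetNS

    # Same namespace the privsep helper targeted above.
    with NetNS('ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a') as ns:
        for addr in ns.get_addr(label='tap863f062e-11'):
            # Expect 169.254.169.254/32 and 192.168.0.2/24, matching the
            # RTM_NEWADDR messages in the reply above.
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])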
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.181 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.184 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.189 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.190 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.191 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.191 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.192 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
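The transactions above re-assert the metadata datapath's plumbing: drop tap863f062e-10 from br-ex if present, add it to br-int, and point its external_ids:iface-id at the metadata port. All three commands are idempotent, which is why two of them report "Transaction caused no change". A sketch of the same sequence through ovsdbapp's Open_vSwitch schema API; the ovsdb socket path is an assumption, while the port, bridge, and iface-id values are the ones logged above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb socket
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # if_exists / may_exist make these no-ops when already applied,
        # which is what "Transaction caused no change" reflects above.
        txn.add(api.del_port('tap863f062e-10', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap863f062e-10', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap863f062e-10',
            ('external_ids',
             {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'})))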
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.200 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.205 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.249 188707 INFO nova.virt.libvirt.driver [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Instance destroyed successfully.
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.250 188707 DEBUG nova.objects.instance [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'resources' on Instance uuid 4e6fb5f9-248e-440a-9cd9-472a05ab19ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.285 188707 DEBUG nova.virt.libvirt.vif [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T15:46:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-22n3finaao3u-a36yxyv7uiwf-vnf-2bqsnpxsnyu2',id=2,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T15:46:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-00l0gxsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T15:46:52Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Feb 24 15:56:33 compute-0 nova_compute[188703]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk5Njk0MDYzNjMyNjc2MjU4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5OTY5NDA2MzYzMjY3NjI1ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTk2OTQwNjM2MzI2NzYyNTg4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=4e6fb5f9-248e-440a-9cd9-472a05ab19ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.286 188707 DEBUG nova.network.os_vif_util [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "93527468-4177-4f9e-a801-345f54dbe456", "address": "fa:16:3e:3a:32:ce", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93527468-41", "ovs_interfaceid": "93527468-4177-4f9e-a801-345f54dbe456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.286 188707 DEBUG nova.network.os_vif_util [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.287 188707 DEBUG os_vif [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.289 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.289 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93527468-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.291 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.294 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.300 188707 INFO os_vif [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:32:ce,bridge_name='br-int',has_traffic_filtering=True,id=93527468-4177-4f9e-a801-345f54dbe456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap93527468-41')
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.301 188707 INFO nova.virt.libvirt.driver [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Deleting instance files /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee_del
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.302 188707 INFO nova.virt.libvirt.driver [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Deletion of /var/lib/nova/instances/4e6fb5f9-248e-440a-9cd9-472a05ab19ee_del complete
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.439 188707 DEBUG nova.virt.libvirt.host [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.440 188707 INFO nova.virt.libvirt.host [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] UEFI support detected
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.443 188707 INFO nova.compute.manager [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Took 0.47 seconds to destroy the instance on the hypervisor.
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.444 188707 DEBUG oslo.service.loopingcall [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.446 188707 DEBUG nova.compute.manager [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.447 188707 DEBUG nova.network.neutron [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.465 188707 DEBUG nova.compute.manager [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-vif-unplugged-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.466 188707 DEBUG oslo_concurrency.lockutils [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.467 188707 DEBUG oslo_concurrency.lockutils [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.467 188707 DEBUG oslo_concurrency.lockutils [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.468 188707 DEBUG nova.compute.manager [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] No waiting events found dispatching network-vif-unplugged-93527468-4177-4f9e-a801-345f54dbe456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.468 188707 DEBUG nova.compute.manager [req-45503bdb-66d5-45ab-ada7-551621428c60 req-89139a08-91db-4e76-9e1b-d7d572e85943 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-vif-unplugged-93527468-4177-4f9e-a801-345f54dbe456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 15:56:33 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:56:33.285 188707 DEBUG nova.virt.libvirt.vif [None req-13154667-6e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
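This rsyslogd complaint explains the split nova.virt.libvirt.vif record above: the message ran to 8192 bytes against a configured maximum of 8096, so rsyslog cut it at the limit, and here the tail shows up as the separate nova_compute record beginning "Cclc1xuJywg". If keeping such records intact matters, the limit can be raised; a hedged fragment for the top of rsyslog.conf (typically /etc/rsyslog.conf, and it must precede any module()/input() statements):

    # raise the per-message size limit so 8 KiB nova debug records survive
    global(maxMessageSize="16k")
    # or the equivalent legacy directive:
    # $MaxMessageSize 16k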
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.644 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:56:33 compute-0 nova_compute[188703]: 2026-02-24 15:56:33.645 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:33.646 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.625 188707 DEBUG nova.compute.manager [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-changed-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.626 188707 DEBUG nova.compute.manager [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Refreshing instance network info cache due to event network-changed-93527468-4177-4f9e-a801-345f54dbe456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.627 188707 DEBUG oslo_concurrency.lockutils [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.628 188707 DEBUG oslo_concurrency.lockutils [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.629 188707 DEBUG nova.network.neutron [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Refreshing network info cache for port 93527468-4177-4f9e-a801-345f54dbe456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.883 188707 INFO nova.network.neutron [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Port 93527468-4177-4f9e-a801-345f54dbe456 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.884 188707 DEBUG nova.network.neutron [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:56:34 compute-0 nova_compute[188703]: 2026-02-24 15:56:34.929 188707 DEBUG oslo_concurrency.lockutils [req-8d78f318-23d6-49d7-b490-22afe80bca8b req-0a742186-a482-4497-ace0-0c48d136fd86 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4e6fb5f9-248e-440a-9cd9-472a05ab19ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.583 188707 DEBUG nova.compute.manager [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.583 188707 DEBUG oslo_concurrency.lockutils [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.584 188707 DEBUG oslo_concurrency.lockutils [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.584 188707 DEBUG oslo_concurrency.lockutils [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.584 188707 DEBUG nova.compute.manager [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] No waiting events found dispatching network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:56:35 compute-0 nova_compute[188703]: 2026-02-24 15:56:35.585 188707 WARNING nova.compute.manager [req-aeef6b60-c1fc-464e-95ed-dc724acc7b3e req-05d7fd01-09a8-4d0e-a916-c56c6f266158 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Received unexpected event network-vif-plugged-93527468-4177-4f9e-a801-345f54dbe456 for instance with vm_state active and task_state deleting.
Feb 24 15:56:36 compute-0 nova_compute[188703]: 2026-02-24 15:56:36.920 188707 DEBUG nova.network.neutron [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:56:36 compute-0 nova_compute[188703]: 2026-02-24 15:56:36.955 188707 INFO nova.compute.manager [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Took 3.51 seconds to deallocate network for instance.
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.012 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.012 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.138 188707 DEBUG nova.compute.provider_tree [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.154 188707 DEBUG nova.scheduler.client.report [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
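The inventory dictionary above maps onto placement's usual capacity formula, usable = (total - reserved) * allocation_ratio. A worked check with the values logged for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4:

    # Values copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, usable)
    # -> VCPU 32, MEMORY_MB 7167, DISK_GB 70 schedulable units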
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.179 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.209 188707 INFO nova.scheduler.client.report [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Deleted allocations for instance 4e6fb5f9-248e-440a-9cd9-472a05ab19ee
Feb 24 15:56:37 compute-0 nova_compute[188703]: 2026-02-24 15:56:37.300 188707 DEBUG oslo_concurrency.lockutils [None req-13154667-6eed-4c02-a8ac-0e1f958a59d3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "4e6fb5f9-248e-440a-9cd9-472a05ab19ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:56:37 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:37.649 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:56:38 compute-0 nova_compute[188703]: 2026-02-24 15:56:38.075 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:38 compute-0 nova_compute[188703]: 2026-02-24 15:56:38.293 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:39 compute-0 podman[246633]: 2026-02-24 15:56:39.145304204 +0000 UTC m=+0.103948306 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 24 15:56:39 compute-0 podman[246634]: 2026-02-24 15:56:39.149059276 +0000 UTC m=+0.106317370 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:56:43 compute-0 nova_compute[188703]: 2026-02-24 15:56:43.077 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:43 compute-0 nova_compute[188703]: 2026-02-24 15:56:43.295 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:45 compute-0 podman[246679]: 2026-02-24 15:56:45.124940315 +0000 UTC m=+0.078254805 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:56:48 compute-0 nova_compute[188703]: 2026-02-24 15:56:48.079 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:48 compute-0 nova_compute[188703]: 2026-02-24 15:56:48.246 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771948593.2444892, 4e6fb5f9-248e-440a-9cd9-472a05ab19ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:56:48 compute-0 nova_compute[188703]: 2026-02-24 15:56:48.246 188707 INFO nova.compute.manager [-] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] VM Stopped (Lifecycle Event)
Feb 24 15:56:48 compute-0 nova_compute[188703]: 2026-02-24 15:56:48.274 188707 DEBUG nova.compute.manager [None req-c798a947-7af6-4fe0-b514-074a0739d0d8 - - - - - -] [instance: 4e6fb5f9-248e-440a-9cd9-472a05ab19ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:56:48 compute-0 nova_compute[188703]: 2026-02-24 15:56:48.298 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:53 compute-0 nova_compute[188703]: 2026-02-24 15:56:53.083 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:53 compute-0 nova_compute[188703]: 2026-02-24 15:56:53.300 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:54 compute-0 nova_compute[188703]: 2026-02-24 15:56:54.328 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:54 compute-0 nova_compute[188703]: 2026-02-24 15:56:54.329 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:55 compute-0 podman[246704]: 2026-02-24 15:56:55.135696126 +0000 UTC m=+0.086369906 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 15:56:55 compute-0 podman[246705]: 2026-02-24 15:56:55.177375333 +0000 UTC m=+0.119011227 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 15:56:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:55.717 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:56:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:55.718 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:56:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:56:55.719 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
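The acquire/acquired/released triple above is the standard oslo.concurrency pattern: the `inner` wrapper logs the wait time on acquisition and the hold time on release. A minimal sketch of the same API, assuming oslo.concurrency is installed; the lock name is taken from the log, but the function body is a placeholder, not Neutron's ProcessMonitor code.

    from oslo_concurrency import lockutils

    # Sketch of the locking pattern behind the three DEBUG lines above.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named lock held; lockutils emits the
        # "acquired ... waited" / "released ... held" DEBUG lines.
        pass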
Feb 24 15:56:55 compute-0 nova_compute[188703]: 2026-02-24 15:56:55.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:56 compute-0 nova_compute[188703]: 2026-02-24 15:56:56.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:57 compute-0 nova_compute[188703]: 2026-02-24 15:56:57.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:57 compute-0 nova_compute[188703]: 2026-02-24 15:56:57.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
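Each "Running periodic task ComputeManager._*" line comes from oslo.service's periodic-task machinery. A hedged sketch of how such a task is declared, assuming oslo.service is installed; this is illustrative, not Nova's actual manager code, and the spacing value is an assumption.

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() iterates tasks like this one and logs the
        # "Running periodic task ..." line before each call.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # placeholder body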
Feb 24 15:56:58 compute-0 nova_compute[188703]: 2026-02-24 15:56:58.087 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:58 compute-0 nova_compute[188703]: 2026-02-24 15:56:58.154 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:56:58 compute-0 nova_compute[188703]: 2026-02-24 15:56:58.155 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:56:58 compute-0 nova_compute[188703]: 2026-02-24 15:56:58.156 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:56:58 compute-0 nova_compute[188703]: 2026-02-24 15:56:58.302 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:56:59 compute-0 podman[246746]: 2026-02-24 15:56:59.148287315 +0000 UTC m=+0.100831701 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 15:56:59 compute-0 podman[246747]: 2026-02-24 15:56:59.159412318 +0000 UTC m=+0.107126201 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi)
Feb 24 15:56:59 compute-0 nova_compute[188703]: 2026-02-24 15:56:59.559 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:56:59 compute-0 nova_compute[188703]: 2026-02-24 15:56:59.577 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:56:59 compute-0 nova_compute[188703]: 2026-02-24 15:56:59.577 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
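The instance_info_cache payload logged two lines up is plain JSON, so the fixed and floating addresses can be pulled out with a few lines of standard-library code. A hypothetical helper, not a Nova API:

    import json

    # Hypothetical helper: extract fixed/floating IPs from the
    # instance_info_cache payload logged above (a JSON list of VIFs).
    def addresses(network_info_json):
        found = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    found.append(("fixed", ip["address"]))
                    for fip in ip.get("floating_ips", []):
                        found.append(("floating", fip["address"]))
        return found

    # For the VIF above this yields:
    # [("fixed", "192.168.0.231"), ("floating", "192.168.122.198")]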
Feb 24 15:56:59 compute-0 nova_compute[188703]: 2026-02-24 15:56:59.578 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:56:59 compute-0 nova_compute[188703]: 2026-02-24 15:56:59.578 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:56:59 compute-0 podman[204685]: time="2026-02-24T15:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:56:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:56:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 24 15:57:00 compute-0 nova_compute[188703]: 2026-02-24 15:57:00.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:00 compute-0 nova_compute[188703]: 2026-02-24 15:57:00.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:00 compute-0 nova_compute[188703]: 2026-02-24 15:57:00.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:00 compute-0 nova_compute[188703]: 2026-02-24 15:57:00.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 15:57:01 compute-0 nova_compute[188703]: 2026-02-24 15:57:01.060 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 15:57:01 compute-0 openstack_network_exporter[207830]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:57:01 compute-0 openstack_network_exporter[207830]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
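These two appctl errors are expected on this host: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only exist for the OVS userspace (netdev) datapath, and the network_info logged earlier shows br-int running with datapath_type "system" (the kernel datapath). A hedged way to confirm, assuming ovs-vsctl is on PATH:

    import subprocess

    # Confirm the datapath type that makes the pmd-* appctl calls fail:
    # they require "netdev"; this host uses the kernel ("system") datapath.
    result = subprocess.run(
        ["ovs-vsctl", "get", "Bridge", "br-int", "datapath_type"],
        capture_output=True, text=True)
    print(result.stdout.strip())  # -> system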
Feb 24 15:57:02 compute-0 nova_compute[188703]: 2026-02-24 15:57:02.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:02 compute-0 nova_compute[188703]: 2026-02-24 15:57:02.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.090 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.305 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.956 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:57:03 compute-0 nova_compute[188703]: 2026-02-24 15:57:03.984 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.122 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 podman[246788]: 2026-02-24 15:57:04.160115221 +0000 UTC m=+0.108550171 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.194 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.196 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.248 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.250 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.309 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.310 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.391 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.404 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.475 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.477 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.533 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.535 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.594 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.596 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.649 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.659 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.706 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.708 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.762 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.763 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.831 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.832 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:57:04 compute-0 nova_compute[188703]: 2026-02-24 15:57:04.904 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
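Every qemu-img invocation above is wrapped by oslo.concurrency, which re-execs the command under its prlimit helper with the 1 GiB address-space and 30 s CPU caps visible as --as=1073741824 --cpu=30. A sketch of the equivalent call, assuming oslo.concurrency is installed; the disk path is copied from the log.

    from oslo_concurrency import processutils

    # Sketch of the wrapped call behind the "Running cmd (subprocess)"
    # lines: prlimit caps address space (1 GiB) and CPU time (30 s).
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk',
        '--force-share', '--output=json',
        prlimit=limits)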
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.375 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.377 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4725MB free_disk=72.19523239135742GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.377 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.378 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.623 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.623 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.624 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.624 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.625 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.809 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.831 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
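A quick worked check of the inventory above: the capacity placement schedules against is (total - reserved) * allocation_ratio per resource class, which is consistent with the earlier "Total usable vcpus: 8, total allocated vcpus: 3" line at a 4.0 ratio.

    # Worked arithmetic for the logged inventory (values from the log):
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2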
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.857 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:57:05 compute-0 nova_compute[188703]: 2026-02-24 15:57:05.858 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:57:06 compute-0 nova_compute[188703]: 2026-02-24 15:57:06.841 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:06 compute-0 nova_compute[188703]: 2026-02-24 15:57:06.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:07 compute-0 sshd-session[246845]: Invalid user ubnt from 185.156.73.233 port 28782
Feb 24 15:57:07 compute-0 sshd-session[246845]: Connection closed by invalid user ubnt 185.156.73.233 port 28782 [preauth]
Feb 24 15:57:08 compute-0 nova_compute[188703]: 2026-02-24 15:57:08.092 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:08 compute-0 nova_compute[188703]: 2026-02-24 15:57:08.308 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:08 compute-0 ovn_controller[98701]: 2026-02-24T15:57:08Z|00053|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Feb 24 15:57:10 compute-0 podman[246847]: 2026-02-24 15:57:10.127930114 +0000 UTC m=+0.080205664 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 15:57:10 compute-0 podman[246848]: 2026-02-24 15:57:10.153575169 +0000 UTC m=+0.107926926 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 15:57:13 compute-0 nova_compute[188703]: 2026-02-24 15:57:13.097 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:13 compute-0 nova_compute[188703]: 2026-02-24 15:57:13.310 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:16 compute-0 podman[246890]: 2026-02-24 15:57:16.130520654 +0000 UTC m=+0.082995741 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:57:18 compute-0 nova_compute[188703]: 2026-02-24 15:57:18.099 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:18 compute-0 nova_compute[188703]: 2026-02-24 15:57:18.312 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:23 compute-0 nova_compute[188703]: 2026-02-24 15:57:23.101 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:23 compute-0 nova_compute[188703]: 2026-02-24 15:57:23.316 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:26 compute-0 podman[246914]: 2026-02-24 15:57:26.137549118 +0000 UTC m=+0.082775615 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 24 15:57:26 compute-0 podman[246913]: 2026-02-24 15:57:26.146606207 +0000 UTC m=+0.097577352 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:57:28 compute-0 nova_compute[188703]: 2026-02-24 15:57:28.104 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:28 compute-0 nova_compute[188703]: 2026-02-24 15:57:28.318 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:29 compute-0 podman[204685]: time="2026-02-24T15:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:57:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:57:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
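[editor's note] The two GET lines above show the podman system service answering libpod REST calls (a collector walking containers/json and containers/stats). A minimal sketch of the first call over the local API socket follows; the socket path /run/podman/podman.sock is an assumption (the log does not show it), while the /v4.9.3/libpod/containers/json endpoint is taken verbatim from the request line.

```python
# Sketch: query the libpod REST API the way the collector in the log does.
# The socket path below is an assumption -- rootful podman usually listens on
# /run/podman/podman.sock when the podman.socket unit is active.
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix domain socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")  # host is ignored for AF_UNIX
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for ctr in json.loads(resp.read()):
    print(ctr["Names"], ctr["State"])
```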
Feb 24 15:57:30 compute-0 podman[246956]: 2026-02-24 15:57:30.177569354 +0000 UTC m=+0.113921402 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 24 15:57:30 compute-0 podman[246955]: 2026-02-24 15:57:30.178956122 +0000 UTC m=+0.122415395 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, version=9.4, config_id=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
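[editor's note] The config_data dicts embedded in the health_status events above are the container definitions edpm_ansible manages. The sketch below is illustrative only, not edpm_ansible's actual code: it shows how a few of those keys would map onto well-known podman CLI flags, to make the logged dicts easier to read.

```python
# Illustrative mapping from the logged config_data keys to podman flags.
# NOT the code edpm_ansible runs; a readability aid under that assumption.
import shlex


def config_to_podman_args(name: str, cfg: dict) -> str:
    args = ["podman", "run", "--name", name]
    if cfg.get("net"):
        args += ["--net", cfg["net"]]            # e.g. 'host'
    if cfg.get("privileged"):
        args += ["--privileged"]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]    # e.g. 'always'
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    for port in cfg.get("ports", []):
        args += ["--publish", port]
    args.append(cfg["image"])
    return shlex.join(args)


# Abbreviated from the node_exporter config_data logged above.
example = {
    "image": "quay.io/prometheus/node-exporter:v1.5.0",
    "net": "host", "privileged": True, "restart": "always",
    "user": "root", "ports": ["9100:9100"],
    "volumes": ["/:/rootfs:ro"],
}
print(config_to_podman_args("node_exporter", example))
```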
Feb 24 15:57:31 compute-0 openstack_network_exporter[207830]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:57:31 compute-0 openstack_network_exporter[207830]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
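[editor's note] The two ERROR lines above come from the exporter issuing ovs-appctl-style unixctl calls. dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to the userspace (netdev/DPDK) datapath, so on a host running the kernel datapath ovs-vswitchd answers "please specify an existing datapath". A hedged sketch that tolerates that condition:

```python
# Sketch: mimic what the exporter does and tolerate the error seen above.
# dpif-netdev/pmd-rxq-show is only meaningful for the userspace (netdev)
# datapath; with the kernel datapath ovs-vswitchd replies
# "please specify an existing datapath", as the log shows.
import subprocess


def pmd_rxq_show() -> str | None:
    # ovs-appctl must be able to reach the ovs-vswitchd control socket.
    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Expected on nodes without a netdev (DPDK) datapath.
        print("skipping PMD stats:", result.stderr.strip())
        return None
    return result.stdout


if __name__ == "__main__":
    pmd_rxq_show()
```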
Feb 24 15:57:33 compute-0 nova_compute[188703]: 2026-02-24 15:57:33.109 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:33 compute-0 nova_compute[188703]: 2026-02-24 15:57:33.322 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:35 compute-0 podman[246994]: 2026-02-24 15:57:35.188744565 +0000 UTC m=+0.128909743 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, release=1770267347, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 24 15:57:38 compute-0 nova_compute[188703]: 2026-02-24 15:57:38.112 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:38 compute-0 nova_compute[188703]: 2026-02-24 15:57:38.325 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.831 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.831 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
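[editor's note] The pair of DEBUG lines above records ceilometer noticing that a single worker thread must serve many pollsters, so a polling cycle runs them one after another. The snippet below is an illustration of that thread-pool effect, not ceilometer's actual code; it only assumes concurrent.futures, which the log shows ceilometer itself uses.

```python
# Illustration only (not ceilometer's code): with a ThreadPoolExecutor of 1
# worker and N queued pollsters, cycle time is roughly the SUM of the
# individual poll times rather than their max -- the situation the DEBUG
# message above warns about.
import time
from concurrent.futures import ThreadPoolExecutor


def poll(name: str) -> str:
    time.sleep(0.2)  # stand-in for one pollster's libvirt round-trip
    return name


pollsters = ["memory.usage", "disk.device.allocation",
             "network.incoming.bytes", "power.state"]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, as in the log
    list(pool.map(poll, pollsters))
print(f"cycle took {time.monotonic() - start:.1f}s "
      f"for {len(pollsters)} pollsters on 1 worker")
```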
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.831 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed50f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.843 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'name': 'vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.847 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.852 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'name': 'vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.854 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:57:39.855160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.881 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.914 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.943 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.945 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.945 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.946 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:57:39.946118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.969 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.971 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.971 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.996 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.997 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:39.997 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.021 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.024 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.026 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:57:40.027892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.033 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.037 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.042 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.044 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.045 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.046 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.046 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.046 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.047 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
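[editor's note] The power.state samples above carry the libvirt domain state code; volume 1 corresponds to VIR_DOMAIN_RUNNING, consistent with the 'status': 'active' reported for all three instances in the discovery output earlier. A small reference mapping, assuming the standard libvirt virDomainState enumeration:

```python
# Assumed mapping: standard libvirt virDomainState codes. The power.state
# samples above report volume 1, i.e. VIR_DOMAIN_RUNNING, matching the
# 'status': 'active' seen in the discovery output earlier in the log.
LIBVIRT_DOMAIN_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    2: "BLOCKED",
    3: "PAUSED",
    4: "SHUTDOWN",
    5: "SHUTOFF",
    6: "CRASHED",
    7: "PMSUSPENDED",
}

print(LIBVIRT_DOMAIN_STATES[1])  # -> RUNNING
```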
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.048 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:57:40.044325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:57:40.045883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.049 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.050 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.050 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.050 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.051 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:57:40.047278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:57:40.048594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:57:40.051380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.130 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.131 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.131 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.225 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.295 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.295 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.297 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.297 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.297 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.297 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.298 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 2224753847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 114510394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:57:40.297004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 94768043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:57:40.298803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.299 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.300 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 811206452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.300 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 179818558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.300 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.latency volume: 156094626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
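The "Updated heartbeat for ..." lines are emitted by a different thread (12) than the sampler (14) and land slightly after the corresponding "Pollster heartbeat update" line, which is why they interleave out of order with the sample output. A queue-based sketch of that producer/consumer split; the actual ceilometer mechanism may differ:

    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()

    def heartbeat(name):
        # Worker side: record a timestamp and move on (cheap, non-blocking).
        ts = datetime.datetime.now(datetime.timezone.utc)
        heartbeats.put((name, ts))

    def update_status(stop):
        # Status side: drain and log/persist asynchronously.
        while not stop.is_set() or not heartbeats.empty():
            try:
                name, ts = heartbeats.get(timeout=0.1)
            except queue.Empty:
                continue
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    stop = threading.Event()
    t = threading.Thread(target=update_status, args=(stop,))
    t.start()
    heartbeat("disk.device.read.latency")
    stop.set()
    t.join()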
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/cpu volume: 36640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:57:40.301407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.301 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 41550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/cpu volume: 36330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
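The cpu meter is cumulative CPU time in nanoseconds (36640000000 ns ≈ 36.64 s for the first instance), so utilisation has to be derived downstream by differencing two polls. A small worked sketch, with a made-up follow-up reading and an assumed 2-vCPU flavour:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        # Seconds of CPU consumed over the interval, normalised by capacity.
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # 36.64 s cumulative now; suppose 36.94 s at the next 30 s poll:
    print(cpu_util_percent(36_640_000_000, 36_940_000_000, 30.0, 2))  # 0.5 %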
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.302 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.303 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.303 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.303 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.303 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.304 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.304 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.304 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.304 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:57:40.302778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.305 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:57:40.305743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
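Every pollster run above logs the same coordination check: with coordination group name [None], no hash ring applies and the agent polls all local instances itself; only a configured group would partition resources across agents. A stand-in sketch of that decision (not tooz's real ring API):

    import hashlib

    def assigned_to(resource_id, members, me):
        # Stand-in for a real hash ring: stable bucket per resource id.
        h = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return members[h % len(members)] == me

    def filter_resources(resources, group, members, me):
        if group is None:
            # What this log shows: no coordination group, keep everything.
            return list(resources)
        return [r for r in resources if assigned_to(r, members, me)]

    uuids = ["2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354",
             "fd83ae88-f3e1-49ef-8167-b8451d014cf7"]
    print(filter_resources(uuids, None, [], "compute-0"))        # all kept
    print(filter_resources(uuids, "central",
                           ["compute-0", "compute-1"],
                           "compute-0"))                         # partitioned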
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.307 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.308 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:57:40.307193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:57:40.308991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.309 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.310 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.310 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.310 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.310 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.311 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.311 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.311 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.312 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
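disk.device.usage is reported in bytes; for scale, the three per-device figures for the first instance convert as follows:

    for n in (21299200, 393216, 583680):
        print(f"{n} B = {n / 2**20:.2f} MiB")  # 20.31, 0.38, 0.56 MiB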
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 2487471190 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:57:40.313258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.313 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 10083548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.314 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.314 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.314 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.314 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.315 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 2328620032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.315 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 15976249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.315 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.316 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.317 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.317 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:57:40.316630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.317 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.317 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.318 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.318 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.318 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.318 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
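Each "volume:" debug line corresponds roughly to one sample object carrying a meter name, type, unit, volume and resource id; for per-device meters the resource id presumably combines the instance UUID and device name. An abridged, partly hypothetical shape:

    from dataclasses import dataclass

    @dataclass
    class Sample:
        name: str          # e.g. "disk.device.write.bytes"
        sample_type: str   # cumulative / gauge / delta
        unit: str          # "B" for byte counters
        volume: int
        resource_id: str   # instance UUID + device suffix (assumed format)

    s = Sample("disk.device.write.bytes", "cumulative", "B", 41779200,
               "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-vda")
    print(s)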
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
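The .rate meters in this cycle are skipped rather than polled: their discovery step returned nothing. A sketch of that short-circuit, with print mirroring the "Skip pollster" message above:

    def run_pollster(name, discover, poll):
        resources = discover()
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return [poll(r) for r in resources]

    # Discovery yields nothing for the rate pollster, so it is skipped:
    print(run_pollster("network.incoming.bytes.rate", lambda: [], None))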
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.320 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.320 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.321 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:57:40.320313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.321 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.321 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.321 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.322 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.322 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.322 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.323 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.324 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:57:40.323649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.324 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.325 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.326 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:57:40.325517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
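network.outgoing.bytes.delta reports the change in the cumulative byte counter since the previous poll; the 70 for the first instance against its cumulative 2286 (logged a few lines below at 15:57:40.332) implies a previous reading of 2216. A cache-and-diff sketch; the cache keying is assumed:

    _cache = {}

    def delta_sample(resource_id, cumulative):
        prev = _cache.get(resource_id)
        _cache[resource_id] = cumulative
        if prev is None:
            return None                   # first poll: nothing to diff against
        return max(cumulative - prev, 0)  # guard against counter resets

    print(delta_sample("2cb64c5b", 2216))  # None (first cycle)
    print(delta_sample("2cb64c5b", 2286))  # 70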
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.327 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:57:40.327490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
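disk.ephemeral.size and disk.root.size finish without any "volume:" debug lines, which reads as these gauges being built from instance/flavor metadata rather than from the libvirt stats path that logs _stats_to_sample (a hedged reading of this log, not a confirmed code path). Illustrative only, with invented field names:

    def size_samples(instance):
        # Gauges straight from flavor metadata, in GB.
        yield ("disk.root.size", instance["flavor"]["disk"])
        yield ("disk.ephemeral.size", instance["flavor"]["ephemeral"])

    print(list(size_samples({"flavor": {"disk": 20, "ephemeral": 0}})))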
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:57:40.328795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.330 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.331 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:57:40.330520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.332 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.333 14 DEBUG ceilometer.compute.pollsters [-] 5315fe0d-538a-4ea7-b3fe-92e5a13f1678/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:57:40.332472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
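[editor's note] The per-instance "volume" values above are per-vNIC counters read through libvirt. A small sketch using the libvirt-python binding, assuming a running libvirtd; the instance UUID and tap device name are taken from lines elsewhere in this log:

    import libvirt  # libvirt-python binding

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354")
    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("tap34a110b8-bd")
    print("network.outgoing.bytes volume:", stats[4])          # tx_bytes
    print("network.incoming.packets.error volume:", stats[2])  # rx_errs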
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:57:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:57:40.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
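[editor's note] The burst of "Finished processing pollster" lines marks the end of one polling interval: each meter configured in the task is discovered, sampled, and then reported complete. A schematic of that loop, with hypothetical names rather than the ceilometer implementation:

    # Schematic of one polling interval: every pollster in the task
    # discovers its resources, emits samples, then logs completion.
    def execute_polling_task(pollsters, discover, publish, log):
        for name, pollster in pollsters.items():
            resources = discover(pollster)
            for sample in pollster.get_samples(resources):
                publish(sample)
            log.debug("Finished processing pollster [%s].", name)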
Feb 24 15:57:41 compute-0 podman[247016]: 2026-02-24 15:57:41.12601905 +0000 UTC m=+0.080244395 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 15:57:41 compute-0 podman[247017]: 2026-02-24 15:57:41.188123267 +0000 UTC m=+0.140606055 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
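[editor's note] These health_status=healthy events are emitted each time podman runs a container's configured healthcheck (the 'healthcheck' mount/test pair in config_data). The same state can be read back on the host; a sketch shelling out to the podman CLI (on much older podman the field is .State.Healthcheck rather than .State.Health):

    import json, subprocess

    def health_status(container):
        # 'podman inspect' exposes the latest healthcheck result under
        # .State.Health (Status, FailingStreak, Log) on podman 4.x.
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container])
        return json.loads(out)

    print(health_status("ceilometer_agent_compute")["Status"])  # e.g. "healthy"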
Feb 24 15:57:42 compute-0 sshd-session[247061]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 15:57:43 compute-0 nova_compute[188703]: 2026-02-24 15:57:43.113 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:43 compute-0 nova_compute[188703]: 2026-02-24 15:57:43.329 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:47 compute-0 podman[247063]: 2026-02-24 15:57:47.15435131 +0000 UTC m=+0.100830091 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.666 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.697 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.698 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.698 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.699 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.700 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.700 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.700 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.701 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.701 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.754 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.756 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:57:47 compute-0 nova_compute[188703]: 2026-02-24 15:57:47.780 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
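[editor's note] The Acquiring/acquired/released triples above come from oslo.concurrency's lockutils, which nova uses to serialise power-state sync per instance UUID; the "waited"/"held" durations are logged by the lock wrapper itself. The same pattern is available to any code via the synchronized decorator; a minimal sketch with the body elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("fd83ae88-f3e1-49ef-8167-b8451d014cf7")
    def query_driver_power_state_and_sync():
        # Query the hypervisor's power state and reconcile it with the
        # DB record; body elided. The decorator produces the acquire/
        # release DEBUG lines seen above.
        pass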
Feb 24 15:57:48 compute-0 nova_compute[188703]: 2026-02-24 15:57:48.117 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:48 compute-0 nova_compute[188703]: 2026-02-24 15:57:48.331 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:51 compute-0 nova_compute[188703]: 2026-02-24 15:57:51.977 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:53 compute-0 nova_compute[188703]: 2026-02-24 15:57:53.119 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:53 compute-0 nova_compute[188703]: 2026-02-24 15:57:53.335 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:53 compute-0 nova_compute[188703]: 2026-02-24 15:57:53.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
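[editor's note] _sync_power_states, _poll_volume_usage, and _poll_rebooting_instances are all oslo.service periodic tasks scheduled by run_periodic_tasks. A minimal sketch of how such a task is declared; the spacing value is illustrative, not nova's configured interval:

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # seconds between runs (illustrative)
        def _poll_rebooting_instances(self, context):
            # Check for instances stuck in a rebooting state; body elided.
            pass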
Feb 24 15:57:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:57:55.718 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:57:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:57:55.719 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:57:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:57:55.720 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:57:56 compute-0 nova_compute[188703]: 2026-02-24 15:57:56.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:57 compute-0 podman[247089]: 2026-02-24 15:57:57.415639151 +0000 UTC m=+0.086904530 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 15:57:57 compute-0 podman[247088]: 2026-02-24 15:57:57.447882586 +0000 UTC m=+0.113271803 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:57:58 compute-0 nova_compute[188703]: 2026-02-24 15:57:58.123 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:58 compute-0 nova_compute[188703]: 2026-02-24 15:57:58.338 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:57:58 compute-0 nova_compute[188703]: 2026-02-24 15:57:58.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:59 compute-0 podman[204685]: time="2026-02-24T15:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:57:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:57:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Feb 24 15:57:59 compute-0 nova_compute[188703]: 2026-02-24 15:57:59.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:57:59 compute-0 nova_compute[188703]: 2026-02-24 15:57:59.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:58:01 compute-0 podman[247131]: 2026-02-24 15:58:01.149884042 +0000 UTC m=+0.103449143 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, distribution-scope=public, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc.)
Feb 24 15:58:01 compute-0 podman[247132]: 2026-02-24 15:58:01.16148162 +0000 UTC m=+0.113739635 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi)
Feb 24 15:58:01 compute-0 openstack_network_exporter[207830]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:58:01 compute-0 openstack_network_exporter[207830]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
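[editor's note] These two recurring exporter errors are expected on this host: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to the userspace (netdev/DPDK) datapath, and with the kernel datapath in use ovs-vswitchd answers "please specify an existing datapath". A sketch that reproduces the call and captures that reply:

    import subprocess

    def pmd_rxq_show():
        # Only meaningful when OVS runs the userspace (netdev/DPDK)
        # datapath; on a kernel-datapath host this returns the same
        # "please specify an existing datapath" error logged above.
        try:
            return subprocess.check_output(
                ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                stderr=subprocess.STDOUT, text=True)
        except subprocess.CalledProcessError as exc:
            return exc.output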
Feb 24 15:58:02 compute-0 nova_compute[188703]: 2026-02-24 15:58:02.921 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:58:02 compute-0 nova_compute[188703]: 2026-02-24 15:58:02.922 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:58:02 compute-0 nova_compute[188703]: 2026-02-24 15:58:02.923 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:58:03 compute-0 nova_compute[188703]: 2026-02-24 15:58:03.125 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:03 compute-0 nova_compute[188703]: 2026-02-24 15:58:03.340 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.375 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.406 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.407 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
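[editor's note] The "Updating instance_info_cache with network_info:" line embeds the full VIF list as JSON, which makes it scriptable when debugging cache-heal issues. A small sketch that pulls the fixed and floating IPs out of such a line; the regex markers are illustrative and the floating_ips key may be absent on ports without a floating IP:

    import json, re

    def ips_from_cache_line(line):
        # The payload is the JSON list between "network_info: " and the
        # trailing " update_instance_cache_with_nw_info" marker.
        payload = re.search(
            r"network_info: (\[.*\]) update_instance_cache", line).group(1)
        for vif in json.loads(payload):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"]
                                          for f in ip.get("floating_ips", [])]

    # For the line above this yields ("192.168.0.42", ["192.168.122.172"]).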
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.408 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.409 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.409 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.410 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.443 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.444 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.445 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.446 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.551 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.606 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.608 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.685 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.686 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.742 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.743 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.795 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.801 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.854 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.855 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.910 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.911 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.963 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:05 compute-0 nova_compute[188703]: 2026-02-24 15:58:05.964 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.031 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.038 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.085 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.086 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:06 compute-0 podman[247192]: 2026-02-24 15:58:06.109714735 +0000 UTC m=+0.074783675 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.136 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.137 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.210 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.211 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.275 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
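[editor's note] Each disk in the resource audit is probed with qemu-img info run under oslo.concurrency's prlimit wrapper (1 GiB address space, 30 s CPU) so a malformed image cannot wedge the agent, and with --force-share so the probe does not take the image lock away from the running guest. An equivalent standalone invocation, using one of the paths logged above:

    import json, subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",   # resource caps, as logged
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk",
        "--force-share", "--output=json",      # don't steal the write lock
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])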
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.636 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.638 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4728MB free_disk=72.19535064697266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.638 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.638 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
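[The Acquiring/acquired pair above, with the matching "released" record at 15:58:06.884, is the trace oslo.concurrency emits from its inner wrapper in lockutils.py. A sketch of the usual decorator form that produces it:]

from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def _update_available_resource():
    # Inventory is collected and reported to placement while the
    # semaphore is held; the DEBUG records above bracket this body.
    ...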
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.752 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.753 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.753 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.754 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.755 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.867 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.882 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
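[The inventory record above is what placement actually allocates against: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged numbers:]

# Capacity check for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, v in inventory.items():
    print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 -- so the 3 allocated vCPUs
# reported above leave ample schedulable VCPU at the 4.0 overcommit ratio.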
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.883 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:58:06 compute-0 nova_compute[188703]: 2026-02-24 15:58:06.884 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:08 compute-0 nova_compute[188703]: 2026-02-24 15:58:08.128 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:08 compute-0 nova_compute[188703]: 2026-02-24 15:58:08.345 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:08 compute-0 nova_compute[188703]: 2026-02-24 15:58:08.879 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
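["Running periodic task ComputeManager._check_instance_build_time" comes from oslo.service's periodic-task machinery. A minimal sketch of how such a task is declared; the class name and spacing value are illustrative, not taken from this host's config:]

from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # run roughly every 60 s
    def _check_instance_build_time(self, context):
        ...  # flag instances stuck in BUILD past the configured timeout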
Feb 24 15:58:12 compute-0 podman[247225]: 2026-02-24 15:58:12.151130514 +0000 UTC m=+0.108833681 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:58:12 compute-0 podman[247226]: 2026-02-24 15:58:12.186159717 +0000 UTC m=+0.134204078 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 15:58:13 compute-0 nova_compute[188703]: 2026-02-24 15:58:13.130 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:13 compute-0 nova_compute[188703]: 2026-02-24 15:58:13.348 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:18 compute-0 nova_compute[188703]: 2026-02-24 15:58:18.132 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:18 compute-0 podman[247270]: 2026-02-24 15:58:18.140885483 +0000 UTC m=+0.095669050 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 15:58:18 compute-0 nova_compute[188703]: 2026-02-24 15:58:18.351 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:23 compute-0 nova_compute[188703]: 2026-02-24 15:58:23.134 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:23 compute-0 nova_compute[188703]: 2026-02-24 15:58:23.353 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:28 compute-0 podman[247295]: 2026-02-24 15:58:28.112741891 +0000 UTC m=+0.068609726 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 24 15:58:28 compute-0 podman[247294]: 2026-02-24 15:58:28.132286248 +0000 UTC m=+0.092601205 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 15:58:28 compute-0 nova_compute[188703]: 2026-02-24 15:58:28.137 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:28 compute-0 nova_compute[188703]: 2026-02-24 15:58:28.356 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:29 compute-0 podman[204685]: time="2026-02-24T15:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:58:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:58:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 24 15:58:31 compute-0 openstack_network_exporter[207830]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:58:31 compute-0 openstack_network_exporter[207830]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:58:32 compute-0 podman[247336]: 2026-02-24 15:58:32.157188586 +0000 UTC m=+0.112934353 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 15:58:32 compute-0 podman[247335]: 2026-02-24 15:58:32.166105631 +0000 UTC m=+0.119623828 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.139 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.359 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.559 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.561 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.561 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.562 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.563 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.567 188707 INFO nova.compute.manager [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Terminating instance
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.570 188707 DEBUG nova.compute.manager [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 15:58:33 compute-0 kernel: tap7d447097-3e (unregistering): left promiscuous mode
Feb 24 15:58:33 compute-0 NetworkManager[56995]: <info>  [1771948713.6149] device (tap7d447097-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 15:58:33 compute-0 ovn_controller[98701]: 2026-02-24T15:58:33Z|00054|binding|INFO|Releasing lport 7d447097-3ec6-4be0-a7c0-25faabfb8456 from this chassis (sb_readonly=0)
Feb 24 15:58:33 compute-0 ovn_controller[98701]: 2026-02-24T15:58:33Z|00055|binding|INFO|Setting lport 7d447097-3ec6-4be0-a7c0-25faabfb8456 down in Southbound
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.628 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 ovn_controller[98701]: 2026-02-24T15:58:33Z|00056|binding|INFO|Removing iface tap7d447097-3e ovn-installed in OVS
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.632 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.637 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.644 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:53:2a 192.168.0.231'], port_security=['fa:16:3e:b0:53:2a 192.168.0.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-fxjcpj7jbkrv-cstggnamwujf-port-klsqv3gcbm73', 'neutron:cidrs': '192.168.0.231/24', 'neutron:device_id': '5315fe0d-538a-4ea7-b3fe-92e5a13f1678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-fxjcpj7jbkrv-cstggnamwujf-port-klsqv3gcbm73', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=7d447097-3ec6-4be0-a7c0-25faabfb8456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.648 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 7d447097-3ec6-4be0-a7c0-25faabfb8456 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a unbound from our chassis
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.649 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.665 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[78176458-d8f8-4880-880f-3e6dc8de4677]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 24 15:58:33 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 31.698s CPU time.
Feb 24 15:58:33 compute-0 systemd-machined[158049]: Machine qemu-3-instance-00000003 terminated.
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.697 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[d34b5098-44e1-443f-80d0-defe9b5225d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.701 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5c5fef-bda6-4a4d-ab53-26a370fa0ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.736 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ddc9d5-2d60-4017-9b4c-55bd791699eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.758 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c26148-0e69-4948-84e1-229c887a36d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 24498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247384, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.775 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8206b635-b6cf-4520-ae00-a52aded6abda]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247385, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247385, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.777 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.780 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.785 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.786 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.787 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.788 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:58:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:33.788 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
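[The txn records above show ovsdbapp replaying the metadata tap's desired state (delete from br-ex, add to br-int, set its iface-id); "Transaction caused no change" means OVS already matched. A sketch of the same commands through ovsdbapp's public Open_vSwitch API, assuming a local ovsdb-server socket; the agent issued them as separate transactions, batched here for brevity:]

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap863f062e-10', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap863f062e-10', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap863f062e-10',
        ('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'})))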
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.854 188707 INFO nova.virt.libvirt.driver [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Instance destroyed successfully.
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.856 188707 DEBUG nova.objects.instance [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'resources' on Instance uuid 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.873 188707 DEBUG nova.virt.libvirt.vif [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T15:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-fxjcpj7jbkrv-cstggnamwujf-vnf-vvcv7cqvcsla',id=3,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T15:50:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-7wd0g29f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T15:50:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Feb 24 15:58:33 compute-0 nova_compute[188703]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzkxOTU0NDM5Mjk4MzQwODUzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc5MTk1NDQzOTI5ODM0MDg1Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03OTE5NTQ0MzkyOTgzNDA4NTM4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=5315fe0d-538a-4ea7-b3fe-92e5a13f1678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.873 188707 DEBUG nova.network.os_vif_util [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.874 188707 DEBUG nova.network.os_vif_util [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.874 188707 DEBUG os_vif [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.876 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.876 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d447097-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.878 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.879 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.882 188707 INFO os_vif [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:53:2a,bridge_name='br-int',has_traffic_filtering=True,id=7d447097-3ec6-4be0-a7c0-25faabfb8456,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7d447097-3e')
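[The unplug sequence above converts nova's VIF dict to an os-vif object and hands it to the 'ovs' plugin. A hedged sketch of those public entry points, with field values taken from the logged VIFOpenVSwitch and Instance; a real object also carries network= and port_profile= as logged, omitted here for brevity:]

import os_vif
from os_vif import objects

os_vif.initialize()  # registers objects and loads the 'ovs' plugin

vif = objects.vif.VIFOpenVSwitch(
    id='7d447097-3ec6-4be0-a7c0-25faabfb8456',
    address='fa:16:3e:b0:53:2a',
    plugin='ovs',
    bridge_name='br-int',
    vif_name='tap7d447097-3e')
instance_info = objects.instance_info.InstanceInfo(
    uuid='5315fe0d-538a-4ea7-b3fe-92e5a13f1678',
    name='instance-00000003')

os_vif.unplug(vif, instance_info)  # issues the DelPortCommand seen above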
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.882 188707 INFO nova.virt.libvirt.driver [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Deleting instance files /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678_del
Feb 24 15:58:33 compute-0 nova_compute[188703]: 2026-02-24 15:58:33.883 188707 INFO nova.virt.libvirt.driver [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Deletion of /var/lib/nova/instances/5315fe0d-538a-4ea7-b3fe-92e5a13f1678_del complete
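[annotation] The two INFO lines above reflect the libvirt driver's cleanup pattern: the instance directory is first renamed with a "_del" suffix and the renamed tree is then removed, so a crash mid-delete leaves an obviously stale directory rather than a half-deleted live one. A generic sketch of that pattern (helper name is illustrative, not Nova's actual code):

    import os
    import shutil

    def delete_instance_files(base_dir, instance_uuid):
        """Rename the instance dir to <uuid>_del, then remove it."""
        path = os.path.join(base_dir, instance_uuid)
        marker = path + '_del'
        os.rename(path, marker)            # cheap, atomic on one filesystem
        shutil.rmtree(marker, ignore_errors=True)

    delete_instance_files('/var/lib/nova/instances',
                          '5315fe0d-538a-4ea7-b3fe-92e5a13f1678')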
Feb 24 15:58:34 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:34.128 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 15:58:34 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 15:58:33.873 188707 DEBUG nova.virt.libvirt.vif [None req-3b7daa14-54 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
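[annotation] The rsyslogd complaint above is a side effect of the verbose DEBUG logging: the nova.virt.libvirt.vif record carrying the full VIF JSON is 8192 bytes, above the configured 8096-byte cap, so rsyslog truncated it. If complete records are wanted, the cap can be raised in rsyslog.conf; a sketch, assuming rsyslog 8.x where both the legacy directive and the global() object form are accepted:

    # /etc/rsyslog.conf -- raise the per-message cap above the default.
    # Legacy form (must appear before any input/module configuration):
    $MaxMessageSize 64k
    # Equivalent modern form:
    global(maxMessageSize="64k")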
Feb 24 15:58:34 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:34.129 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.129 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:34 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:34.130 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.142 188707 INFO nova.compute.manager [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.143 188707 DEBUG oslo.service.loopingcall [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.143 188707 DEBUG nova.compute.manager [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.143 188707 DEBUG nova.network.neutron [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.167 188707 DEBUG nova.compute.manager [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-vif-unplugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.168 188707 DEBUG oslo_concurrency.lockutils [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.168 188707 DEBUG oslo_concurrency.lockutils [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.168 188707 DEBUG oslo_concurrency.lockutils [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.169 188707 DEBUG nova.compute.manager [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] No waiting events found dispatching network-vif-unplugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.169 188707 DEBUG nova.compute.manager [req-b86d7be2-f7cf-4e8f-89f9-8410214929ba req-8fdad6da-ef70-4caa-989b-3ec44b6d661c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-vif-unplugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
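[annotation] The req-b86d7be2 sequence above shows Nova's external-event correlation: Neutron reports network-vif-unplugged, and the compute manager takes the per-instance "<uuid>-events" lock, pops any waiter registered for that event name, and, finding none ("No waiting events found"), simply logs the dispatch. A minimal sketch of that waiter-registry pattern under a per-instance lock (pure standard library, illustrative only, not Nova's actual InstanceEvents code):

    import threading

    class InstanceEvents:
        """Map instance UUID -> {event name: waiter} under one lock."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # uuid -> {event_name: threading.Event}

        def prepare(self, uuid, event_name):
            """Register interest in an event before triggering the action."""
            waiter = threading.Event()
            with self._lock:
                self._events.setdefault(uuid, {})[event_name] = waiter
            return waiter

        def pop(self, uuid, event_name):
            """Called when the external event arrives; may return None."""
            with self._lock:
                return self._events.get(uuid, {}).pop(event_name, None)

    events = InstanceEvents()
    w = events.pop('5315fe0d-538a-4ea7-b3fe-92e5a13f1678',
                   'network-vif-unplugged-7d447097-3ec6-4be0-a7c0-25faabfb8456')
    if w is None:
        print('No waiting events found')   # matches the DEBUG line above
    else:
        w.set()                            # wake the thread waiting on the event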
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.629 188707 DEBUG nova.compute.manager [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-changed-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.629 188707 DEBUG nova.compute.manager [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Refreshing instance network info cache due to event network-changed-7d447097-3ec6-4be0-a7c0-25faabfb8456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.630 188707 DEBUG oslo_concurrency.lockutils [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.630 188707 DEBUG oslo_concurrency.lockutils [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:58:34 compute-0 nova_compute[188703]: 2026-02-24 15:58:34.631 188707 DEBUG nova.network.neutron [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Refreshing network info cache for port 7d447097-3ec6-4be0-a7c0-25faabfb8456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.356 188707 DEBUG nova.network.neutron [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.373 188707 INFO nova.compute.manager [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Took 1.23 seconds to deallocate network for instance.
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.415 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.415 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.542 188707 DEBUG nova.compute.provider_tree [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.560 188707 DEBUG nova.scheduler.client.report [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
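[annotation] The two "Inventory has not changed" lines show the report client's cache check: the resource tracker rebuilds the inventory dict (VCPU/MEMORY_MB/DISK_GB with total, reserved, allocation_ratio and so on) and only PUTs it to the Placement service when it differs from the copy cached for that provider. A sketch of that comparison with a plain dict cache keyed by provider UUID (hypothetical helper, not the nova.scheduler.client.report code):

    # Skip the Placement update when the generated inventory matches the cache.
    _cached_inventory = {}   # provider uuid -> last inventory dict sent

    def set_inventory_for_provider(provider_uuid, inventory, put_fn):
        if _cached_inventory.get(provider_uuid) == inventory:
            print('Inventory has not changed for provider %s' % provider_uuid)
            return False
        # put_fn would e.g. PUT /resource_providers/{uuid}/inventories
        put_fn(provider_uuid, inventory)
        _cached_inventory[provider_uuid] = dict(inventory)
        return True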
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.583 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.622 188707 INFO nova.scheduler.client.report [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Deleted allocations for instance 5315fe0d-538a-4ea7-b3fe-92e5a13f1678
Feb 24 15:58:35 compute-0 nova_compute[188703]: 2026-02-24 15:58:35.697 188707 DEBUG oslo_concurrency.lockutils [None req-3b7daa14-5409-4ffd-a672-66c5fc538101 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.079 188707 DEBUG nova.network.neutron [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updated VIF entry in instance network info cache for port 7d447097-3ec6-4be0-a7c0-25faabfb8456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.080 188707 DEBUG nova.network.neutron [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Updating instance_info_cache with network_info: [{"id": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "address": "fa:16:3e:b0:53:2a", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d447097-3e", "ovs_interfaceid": "7d447097-3ec6-4be0-a7c0-25faabfb8456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.111 188707 DEBUG oslo_concurrency.lockutils [req-4ab1ff47-605d-4802-a247-4ebbcb5fa521 req-348d134f-ed0d-47dd-9b0e-abc7cc241335 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-5315fe0d-538a-4ea7-b3fe-92e5a13f1678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.268 188707 DEBUG nova.compute.manager [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.269 188707 DEBUG oslo_concurrency.lockutils [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.269 188707 DEBUG oslo_concurrency.lockutils [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.270 188707 DEBUG oslo_concurrency.lockutils [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "5315fe0d-538a-4ea7-b3fe-92e5a13f1678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.270 188707 DEBUG nova.compute.manager [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] No waiting events found dispatching network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 15:58:36 compute-0 nova_compute[188703]: 2026-02-24 15:58:36.270 188707 WARNING nova.compute.manager [req-44bfc3d8-e6a1-4eb5-a8d5-cfac70b53da1 req-293abe19-e681-4e82-bd72-1e058c10b2ac 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Received unexpected event network-vif-plugged-7d447097-3ec6-4be0-a7c0-25faabfb8456 for instance with vm_state deleted and task_state None.
Feb 24 15:58:37 compute-0 podman[247408]: 2026-02-24 15:58:37.142254734 +0000 UTC m=+0.095830525 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.7, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, release=1770267347, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 15:58:38 compute-0 nova_compute[188703]: 2026-02-24 15:58:38.143 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:38 compute-0 nova_compute[188703]: 2026-02-24 15:58:38.879 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:43 compute-0 nova_compute[188703]: 2026-02-24 15:58:43.144 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:43 compute-0 podman[247429]: 2026-02-24 15:58:43.147340328 +0000 UTC m=+0.085942083 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute)
Feb 24 15:58:43 compute-0 podman[247430]: 2026-02-24 15:58:43.192736065 +0000 UTC m=+0.127797282 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 24 15:58:43 compute-0 sshd-session[247473]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 15:58:43 compute-0 nova_compute[188703]: 2026-02-24 15:58:43.882 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:48 compute-0 nova_compute[188703]: 2026-02-24 15:58:48.147 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:48 compute-0 nova_compute[188703]: 2026-02-24 15:58:48.852 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771948713.850844, 5315fe0d-538a-4ea7-b3fe-92e5a13f1678 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:58:48 compute-0 nova_compute[188703]: 2026-02-24 15:58:48.853 188707 INFO nova.compute.manager [-] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] VM Stopped (Lifecycle Event)
Feb 24 15:58:48 compute-0 nova_compute[188703]: 2026-02-24 15:58:48.877 188707 DEBUG nova.compute.manager [None req-62226eff-d21d-4c38-8b4b-f47eed5e4333 - - - - - -] [instance: 5315fe0d-538a-4ea7-b3fe-92e5a13f1678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:58:48 compute-0 nova_compute[188703]: 2026-02-24 15:58:48.884 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:49 compute-0 podman[247475]: 2026-02-24 15:58:49.120544263 +0000 UTC m=+0.072575225 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 15:58:53 compute-0 nova_compute[188703]: 2026-02-24 15:58:53.150 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:53 compute-0 nova_compute[188703]: 2026-02-24 15:58:53.888 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:53 compute-0 nova_compute[188703]: 2026-02-24 15:58:53.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:55.720 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:58:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:55.720 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:58:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:58:55.721 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:58:55 compute-0 nova_compute[188703]: 2026-02-24 15:58:55.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:58 compute-0 nova_compute[188703]: 2026-02-24 15:58:58.152 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:58 compute-0 nova_compute[188703]: 2026-02-24 15:58:58.891 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:58:58 compute-0 nova_compute[188703]: 2026-02-24 15:58:58.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:58 compute-0 nova_compute[188703]: 2026-02-24 15:58:58.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
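[annotation] The "Running periodic task ComputeManager._poll_*" lines here and above come from oslo.service's periodic task machinery: manager methods are declared with a decorator, and the runner fires each one on its own spacing, logging before every call. A minimal sketch of how such tasks are declared (the manager class and task body are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ExampleManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            # the real task would unrescue instances past their timeout
            pass

    mgr = ExampleManager()
    # the service normally drives this from a timer loop:
    mgr.run_periodic_tasks(context=None)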
Feb 24 15:58:59 compute-0 podman[247500]: 2026-02-24 15:58:59.117873554 +0000 UTC m=+0.076666638 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 15:58:59 compute-0 podman[247501]: 2026-02-24 15:58:59.133487463 +0000 UTC m=+0.080858113 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 24 15:58:59 compute-0 podman[204685]: time="2026-02-24T15:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:58:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:58:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 24 15:58:59 compute-0 nova_compute[188703]: 2026-02-24 15:58:59.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:58:59 compute-0 nova_compute[188703]: 2026-02-24 15:58:59.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 15:58:59 compute-0 nova_compute[188703]: 2026-02-24 15:58:59.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 15:59:01 compute-0 nova_compute[188703]: 2026-02-24 15:59:01.001 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:59:01 compute-0 nova_compute[188703]: 2026-02-24 15:59:01.002 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:59:01 compute-0 nova_compute[188703]: 2026-02-24 15:59:01.003 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 15:59:01 compute-0 nova_compute[188703]: 2026-02-24 15:59:01.003 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:59:01 compute-0 openstack_network_exporter[207830]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Feb 24 15:59:01 compute-0 openstack_network_exporter[207830]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:59:03 compute-0 podman[247540]: 2026-02-24 15:59:03.114443754 +0000 UTC m=+0.076030300 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, version=9.4, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Feb 24 15:59:03 compute-0 podman[247541]: 2026-02-24 15:59:03.133832337 +0000 UTC m=+0.083499055 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 24 15:59:03 compute-0 nova_compute[188703]: 2026-02-24 15:59:03.153 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:03 compute-0 nova_compute[188703]: 2026-02-24 15:59:03.893 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.019 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.038 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.039 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.040 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.040 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.040 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.041 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.077 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.078 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.079 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.079 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.227 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.310 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.311 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.368 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.369 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.423 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.425 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.477 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.486 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.565 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.567 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 ovn_controller[98701]: 2026-02-24T15:59:06Z|00057|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.632 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.633 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.698 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.699 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:06 compute-0 nova_compute[188703]: 2026-02-24 15:59:06.778 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
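Annotation: the repeated `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ... qemu-img info` commands above are nova's periodic disk audit probing each instance disk under resource limits, so a hung or malicious image cannot exhaust the host. A minimal sketch of how such a call is built with oslo.concurrency (the limit values are read straight off the logged command lines; the exact constant names in nova are an assumption):

```python
from oslo_concurrency import processutils

# --cpu=30 and --as=1073741824 in the logged commands map to these limits.
QEMU_IMG_LIMITS = processutils.ProcessLimits(
    cpu_time=30,                       # seconds of CPU before the child is killed
    address_space=1024 * 1024 * 1024)  # 1 GiB virtual memory cap

def qemu_img_info(path):
    # Produces exactly the logged invocation:
    #   python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- \
    #     env LC_ALL=C LANG=C qemu-img info <path> --force-share --output=json
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', path, '--force-share', '--output=json',
        prlimit=QEMU_IMG_LIMITS)
    return out
```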
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.206 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.207 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=72.21729278564453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.207 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.208 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.325 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.326 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.326 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.327 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.425 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.447 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
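Annotation: the inventory dict above is what the scheduler sees for this node. As I understand placement's capacity rule, usable capacity per resource class is (total - reserved) * allocation_ratio; a quick worked check against the logged numbers:

```python
# Values copied from the set_inventory_for_provider line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, usable)
# VCPU 32.0  (8 physical vCPUs oversubscribed 4x)
# MEMORY_MB 7167.0
# DISK_GB 70.2
```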
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.470 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 15:59:07 compute-0 nova_compute[188703]: 2026-02-24 15:59:07.470 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
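Annotation: the "Acquiring lock / acquired / released" triplets around `compute_resources` (lockutils.py:404/409/423) are emitted by oslo.concurrency's lock wrapper. A sketch of the pattern, with a placeholder body:

```python
from oslo_concurrency import lockutils

class ResourceTracker:
    # The decorator's inner wrapper logs the waited/held timings seen above
    # (here: waited 0.000s, held 0.263s for _update_available_resource).
    @lockutils.synchronized('compute_resources')
    def _update_available_resource(self, context):
        ...  # inventory refresh runs with the lock held
```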
Feb 24 15:59:08 compute-0 podman[247606]: 2026-02-24 15:59:08.144832155 +0000 UTC m=+0.108413289 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter)
Feb 24 15:59:08 compute-0 nova_compute[188703]: 2026-02-24 15:59:08.155 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:08 compute-0 nova_compute[188703]: 2026-02-24 15:59:08.895 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:11 compute-0 nova_compute[188703]: 2026-02-24 15:59:11.468 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:11 compute-0 nova_compute[188703]: 2026-02-24 15:59:11.468 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:11 compute-0 sshd-session[247627]: Accepted publickey for zuul from 38.102.83.66 port 49274 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 15:59:11 compute-0 systemd-logind[813]: New session 29 of user zuul.
Feb 24 15:59:11 compute-0 systemd[1]: Started Session 29 of User zuul.
Feb 24 15:59:11 compute-0 sshd-session[247627]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 15:59:12 compute-0 sudo[247804]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywztusrytzzgntdfropzncavqhgsswm ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771948751.6634269-59088-45843604094225/AnsiballZ_command.py'
Feb 24 15:59:12 compute-0 sudo[247804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 15:59:12 compute-0 python3[247807]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 15:59:12 compute-0 sudo[247804]: pam_unix(sudo:session): session closed for user root
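Annotation: the Zuul job above shells out through sudo to confirm the ceilometer agent container is running. A rough Python equivalent of that one-liner (the sample output is illustrative, not taken from this log):

```python
import subprocess

# podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
out = subprocess.run(
    ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
    capture_output=True, text=True, check=True).stdout
matches = [line for line in out.splitlines()
           if 'ceilometer_agent_compute' in line]
print(matches)  # e.g. ['ceilometer_agent_compute Up 2 hours (healthy)']
```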
Feb 24 15:59:13 compute-0 nova_compute[188703]: 2026-02-24 15:59:13.159 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:13 compute-0 nova_compute[188703]: 2026-02-24 15:59:13.899 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:14 compute-0 podman[247847]: 2026-02-24 15:59:14.125875484 +0000 UTC m=+0.072773471 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.license=GPLv2)
Feb 24 15:59:14 compute-0 podman[247848]: 2026-02-24 15:59:14.184522345 +0000 UTC m=+0.130253880 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 15:59:18 compute-0 nova_compute[188703]: 2026-02-24 15:59:18.161 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:18 compute-0 nova_compute[188703]: 2026-02-24 15:59:18.901 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:20 compute-0 podman[247893]: 2026-02-24 15:59:20.119764927 +0000 UTC m=+0.072793891 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 15:59:23 compute-0 nova_compute[188703]: 2026-02-24 15:59:23.163 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:23 compute-0 nova_compute[188703]: 2026-02-24 15:59:23.903 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.165 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.417 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.419 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.438 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.520 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.521 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.534 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.536 188707 INFO nova.compute.claims [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Claim successful on node compute-0.ctlplane.example.com
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.702 188707 DEBUG nova.compute.provider_tree [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.719 188707 DEBUG nova.scheduler.client.report [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.741 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.741 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.820 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.838 188707 INFO nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.895 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 15:59:28 compute-0 nova_compute[188703]: 2026-02-24 15:59:28.906 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.018 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.020 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.021 188707 INFO nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Creating image(s)
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.022 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.022 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.023 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.024 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0586a242fd806d9546514e047f78171947acd4f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:29 compute-0 nova_compute[188703]: 2026-02-24 15:59:29.025 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0586a242fd806d9546514e047f78171947acd4f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:29 compute-0 podman[204685]: time="2026-02-24T15:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:59:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:59:29 compute-0 podman[204685]: @ - - [24/Feb/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 24 15:59:30 compute-0 podman[247916]: 2026-02-24 15:59:30.133990323 +0000 UTC m=+0.086121496 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.164 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 podman[247917]: 2026-02-24 15:59:30.186615079 +0000 UTC m=+0.132690086 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.240 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.241 188707 DEBUG nova.virt.images [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] 5aa47f01-f7e6-42e4-82de-74027f4796c3 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.243 188707 DEBUG nova.privsep.utils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.243 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.part /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.449 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.part /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.converted" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.455 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.508 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.509 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0586a242fd806d9546514e047f78171947acd4f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
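Annotation: lines 15:59:29.024 through 15:59:30.509 trace the base-image cache fill under the image-hash lock: download to `<base>.part`, inspect, convert qcow2 to raw, verify, promote. A condensed sketch (the convert command is verbatim from the log; the promote step is an assumption about what happens between the lock release and the next `qemu-img info` on the bare base path):

```python
from oslo_concurrency import processutils

base = '/var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f'

# 1. Glance image 5aa47f01-... is fetched to <base>.part and inspected.
# 2. "was qcow2, converting to raw" triggers:
processutils.execute('qemu-img', 'convert', '-t', 'none',
                     '-O', 'raw', '-f', 'qcow2',
                     base + '.part', base + '.converted')
# 3. The converted file is verified and moved into place as <base>,
#    which later instances reuse without re-downloading.
```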
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.524 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.573 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.574 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "b0586a242fd806d9546514e047f78171947acd4f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.574 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0586a242fd806d9546514e047f78171947acd4f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.588 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.636 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.637 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f,backing_fmt=raw /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.675 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f,backing_fmt=raw /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.677 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "b0586a242fd806d9546514e047f78171947acd4f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
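Annotation: the instance root disk is not a copy of the base image but a qcow2 copy-on-write overlay on top of it; writes land in the overlay while the raw base stays shared read-only across instances (the same trick repeats at 15:59:31 for the ephemeral disk against `ephemeral_1_0706d66`). The logged create call, reproduced:

```python
from oslo_concurrency import processutils

# 1073741824 = 1 GiB virtual size from the flavor's root disk.
processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'create', '-f', 'qcow2',
    '-o', 'backing_file=/var/lib/nova/instances/_base/'
          'b0586a242fd806d9546514e047f78171947acd4f,backing_fmt=raw',
    '/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk',
    '1073741824')
```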
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.678 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.741 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.742 188707 DEBUG nova.virt.disk.api [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Checking if we can resize image /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.742 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.820 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.821 188707 DEBUG nova.virt.disk.api [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Cannot resize image /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
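Annotation: "Cannot resize image ... to a smaller size" is the expected outcome here, not an error: the overlay was just created at 1073741824 bytes and the flavor target is the same size, so there is nothing to grow. A hypothetical sketch of the check (nova's actual implementation in nova/virt/disk/api.py may differ in detail):

```python
import json
from oslo_concurrency import processutils

def can_resize_image(path, size):
    # Compare the requested size with the disk's current virtual size;
    # qcow2 disks are only ever grown, never shrunk.
    out, _err = processutils.execute('qemu-img', 'info', path,
                                     '--force-share', '--output=json')
    virtual_size = json.loads(out)['virtual-size']
    return size > virtual_size  # equal or smaller: leave the disk alone
```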
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.822 188707 DEBUG nova.objects.instance [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'migration_context' on Instance uuid e86936dc-53ea-4101-81d9-99a20ee3b8fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.842 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.843 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.845 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.875 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.952 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.954 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.955 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:30 compute-0 nova_compute[188703]: 2026-02-24 15:59:30.969 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.056 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.057 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.103 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.104 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.105 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.184 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.185 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.185 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Ensure instance console log exists: /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.186 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.186 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.186 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.188 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:59:16Z,direct_url=<?>,disk_format='qcow2',id=5aa47f01-f7e6-42e4-82de-74027f4796c3,min_disk=0,min_ram=0,name='fvt_testing_image',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:59:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': '5aa47f01-f7e6-42e4-82de-74027f4796c3'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.194 188707 WARNING nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.200 188707 DEBUG nova.virt.libvirt.host [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.201 188707 DEBUG nova.virt.libvirt.host [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.204 188707 DEBUG nova.virt.libvirt.host [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.205 188707 DEBUG nova.virt.libvirt.host [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
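
The v1 probe fails and the v2 probe succeeds because this host runs a unified (cgroups-v2-only) hierarchy: available controllers are listed in a single file rather than appearing as mounted v1 hierarchies. A minimal check along the same lines; the path is the standard kernel location, not taken from this log:

    # Sketch: detect a usable 'cpu' controller on a cgroups-v2 host.
    # /sys/fs/cgroup/cgroup.controllers lists the controllers available
    # at the root of the unified hierarchy.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy -> would fall back to v1 probing

    print(has_cgroupsv2_cpu_controller())
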
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.205 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.205 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T15:59:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='83bffd23-6ac3-43b1-8178-4f0d4ea12134',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-24T15:59:16Z,direct_url=<?>,disk_format='qcow2',id=5aa47f01-f7e6-42e4-82de-74027f4796c3,min_disk=0,min_ram=0,name='fvt_testing_image',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-24T15:59:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.206 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.206 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.206 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.206 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.207 188707 DEBUG nova.virt.hardware [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
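
With no flavor or image constraints (all limits 0, i.e. unset), the limits fall back to 65536 per level and Nova enumerates every sockets*cores*threads factorization of the vCPU count; for 1 vCPU the only candidate is 1:1:1, as logged. A toy version of that enumeration, not Nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log
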
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.212 188707 DEBUG nova.objects.instance [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e86936dc-53ea-4101-81d9-99a20ee3b8fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.230 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] End _get_guest_xml xml=<domain type="kvm">
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <uuid>e86936dc-53ea-4101-81d9-99a20ee3b8fd</uuid>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <name>instance-00000005</name>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <memory>524288</memory>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <metadata>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:name>fvt_testing_server</nova:name>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 15:59:31</nova:creationTime>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:flavor name="fvt_testing_flavor">
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:memory>512</nova:memory>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:ephemeral>1</nova:ephemeral>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:user uuid="bd338d866e3242aeb685fec99c451955">admin</nova:user>
Feb 24 15:59:31 compute-0 nova_compute[188703]:         <nova:project uuid="4407f5b870e145d8917119ad928717e8">admin</nova:project>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="5aa47f01-f7e6-42e4-82de-74027f4796c3"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <nova:ports/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </metadata>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <system>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="serial">e86936dc-53ea-4101-81d9-99a20ee3b8fd</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="uuid">e86936dc-53ea-4101-81d9-99a20ee3b8fd</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </system>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <os>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </os>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <features>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <apic/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </features>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </clock>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </cpu>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   <devices>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.eph0"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <target dev="vdb" bus="virtio"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.config"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </disk>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/console.log" append="off"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </serial>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <video>
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </video>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </rng>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 15:59:31 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 15:59:31 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 15:59:31 compute-0 nova_compute[188703]:   </devices>
Feb 24 15:59:31 compute-0 nova_compute[188703]: </domain>
Feb 24 15:59:31 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
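
The document above is what the driver hands to libvirt next. A sketch of the corresponding libvirt-python calls, assuming the qemu:///system URI and a hypothetical file holding the XML printed above:

    import libvirt   # libvirt-python bindings

    xml = open('/tmp/instance-00000005.xml').read()  # hypothetical dump of the XML above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persistent definition (the virsh define step)
        dom.create()                # power the guest on (the virsh start step)
    finally:
        conn.close()
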
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.279 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.280 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.280 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.281 188707 INFO nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Using config drive
Feb 24 15:59:31 compute-0 openstack_network_exporter[207830]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 15:59:31 compute-0 openstack_network_exporter[207830]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.422 188707 INFO nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Creating config drive at /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.config
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.426 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzlgydqoq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 15:59:31 compute-0 nova_compute[188703]: 2026-02-24 15:59:31.560 188707 DEBUG oslo_concurrency.processutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzlgydqoq" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
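
The config drive is just an ISO 9660 image labelled config-2, packed from a temporary directory of metadata files. A sketch of the same mkisofs invocation from Python; the flags and publisher string mirror the log, while the staging path shown is only illustrative (the actual temp directory is created per build):

    import subprocess

    # Sketch of the config-drive build logged above. `staging` would hold
    # the openstack/ metadata tree that Nova writes before packing it.
    staging = '/tmp/tmpzlgydqoq'          # temp dir from the log; illustrative here
    iso = 'disk.config'
    subprocess.run(
        ['mkisofs', '-o', iso,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',   # volume label cloud-init looks for
         staging],
        check=True)
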
Feb 24 15:59:31 compute-0 systemd-machined[158049]: New machine qemu-5-instance-00000005.
Feb 24 15:59:31 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.234 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948772.2337205, e86936dc-53ea-4101-81d9-99a20ee3b8fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.234 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] VM Resumed (Lifecycle Event)
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.238 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.238 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.243 188707 INFO nova.virt.libvirt.driver [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Instance spawned successfully.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.243 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.258 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.266 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
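
"current DB power_state: 0, VM power_state: 1" compares Nova's stored value with what the hypervisor just reported; the integers come from nova.compute.power_state. The relevant constants, to the best of my knowledge of that module, for reading lines like the one above:

    # nova.compute.power_state constants (values as commonly documented).
    NOSTATE   = 0   # DB value before the first sync -- the "0" in the log
    RUNNING   = 1   # what libvirt reports once the guest is up -- the "1"
    PAUSED    = 3
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7
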
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.272 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.272 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.273 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.273 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.274 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.274 188707 DEBUG nova.virt.libvirt.driver [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
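
The six "Found default" records above show the driver persisting the bus and model choices it actually used, so later moves of this instance keep the same virtual hardware. Collected into one mapping, with every value taken straight from the log:

    # Defaults Nova registered for this instance; stored as image
    # properties in the instance's system metadata.
    registered_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }
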
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.284 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.285 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771948772.239258, e86936dc-53ea-4101-81d9-99a20ee3b8fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.285 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] VM Started (Lifecycle Event)
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.309 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.315 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.340 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.346 188707 INFO nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Took 3.33 seconds to spawn the instance on the hypervisor.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.347 188707 DEBUG nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.404 188707 INFO nova.compute.manager [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Took 3.92 seconds to build instance.
Feb 24 15:59:32 compute-0 nova_compute[188703]: 2026-02-24 15:59:32.419 188707 DEBUG oslo_concurrency.lockutils [None req-459278e2-1073-4a0e-82a2-837cdcec4890 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:33 compute-0 nova_compute[188703]: 2026-02-24 15:59:33.168 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 15:59:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 15:59:33 compute-0 podman[248027]: 2026-02-24 15:59:33.388967388 +0000 UTC m=+0.088687868 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 15:59:33 compute-0 podman[248026]: 2026-02-24 15:59:33.41815292 +0000 UTC m=+0.124697617 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release=1214.1726694543, container_name=kepler, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Feb 24 15:59:33 compute-0 nova_compute[188703]: 2026-02-24 15:59:33.908 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:38 compute-0 nova_compute[188703]: 2026-02-24 15:59:38.172 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:38 compute-0 nova_compute[188703]: 2026-02-24 15:59:38.911 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:39 compute-0 podman[248086]: 2026-02-24 15:59:39.160611674 +0000 UTC m=+0.109481099 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, vendor=Red Hat, Inc.)
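
The health_status=healthy fields in the podman records above come from podman periodically running each container's configured test command (the '/openstack/healthcheck ...' entries in config_data). The same check can be triggered by hand; a sketch, with the container name taken from the record above:

    import subprocess

    # `podman healthcheck run NAME` executes the container's configured
    # test and exits 0 when the container reports healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'openstack_network_exporter'])
    print('healthy' if result.returncode == 0 else 'unhealthy')
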
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.832 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.833 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.833 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
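
The long run of "Registering pollster" records is one polling cycle being fanned out: each stevedore extension is submitted to a shared ThreadPoolExecutor, and with the single worker noted earlier in the cycle the pollsters simply serialize. The shape of that pattern, stripped of ceilometer specifics:

    from concurrent.futures import ThreadPoolExecutor

    # Sketch of the registration pattern in the log: every pollster is
    # submitted to the same executor; with max_workers=1 they run one
    # after another, hence the warning that the cycle may take longer.
    def run_polling_cycle(pollsters, workers=1):
        results = {}
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = {executor.submit(func): name for name, func in pollsters.items()}
            for future, name in futures.items():
                results[name] = future.result()   # propagate pollster errors
        return results

    # Hypothetical pollsters standing in for the stevedore extensions above.
    print(run_polling_cycle({'memory.usage': lambda: 42, 'cpu': lambda: 7}))
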
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.848 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'name': 'vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.854 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.858 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e86936dc-53ea-4101-81d9-99a20ee3b8fd from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 15:59:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:39.862 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e86936dc-53ea-4101-81d9-99a20ee3b8fd -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.018 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Tue, 24 Feb 2026 15:59:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-11f153bc-940c-40ad-9e26-240ea1f28f6c x-openstack-request-id: req-11f153bc-940c-40ad-9e26-240ea1f28f6c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.018 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e86936dc-53ea-4101-81d9-99a20ee3b8fd", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "4407f5b870e145d8917119ad928717e8", "user_id": "bd338d866e3242aeb685fec99c451955", "metadata": {}, "hostId": "781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62", "image": {"id": "5aa47f01-f7e6-42e4-82de-74027f4796c3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/5aa47f01-f7e6-42e4-82de-74027f4796c3"}]}, "flavor": {"id": "83bffd23-6ac3-43b1-8178-4f0d4ea12134", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/83bffd23-6ac3-43b1-8178-4f0d4ea12134"}]}, "created": "2026-02-24T15:59:27Z", "updated": "2026-02-24T15:59:32Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e86936dc-53ea-4101-81d9-99a20ee3b8fd"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e86936dc-53ea-4101-81d9-99a20ee3b8fd"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T15:59:32.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.018 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e86936dc-53ea-4101-81d9-99a20ee3b8fd used request id req-11f153bc-940c-40ad-9e26-240ea1f28f6c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.020 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e86936dc-53ea-4101-81d9-99a20ee3b8fd', 'name': 'fvt_testing_server', 'flavor': {'id': '83bffd23-6ac3-43b1-8178-4f0d4ea12134', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '5aa47f01-f7e6-42e4-82de-74027f4796c3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
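Note the discovery fallback above: instances 2cb64c5b... and fd83ae88... were resolved entirely from locally cached libvirt metadata, while e86936dc... (created at 15:59:27, seconds before this cycle) triggered "Querying metadata ... from Nova API" and the REQ/RESP pair against nova-internal at microversion 2.1. A sketch of that lookup with keystoneauth1 and python-novaclient; the auth URL and credentials are placeholders, not values from this deployment:

```python
# Sketch of the Nova API fallback: when an instance's metadata is not yet
# available locally, the agent fetches the server record from Nova.
from keystoneauth1 import identity, session
from novaclient import client as nova_client

auth = identity.Password(
    auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder endpoint
    username='ceilometer', password='secret',                    # placeholder credentials
    project_name='service',
    user_domain_name='Default', project_domain_name='Default',
)
sess = session.Session(auth=auth)
nova = nova_client.Client('2.1', session=sess)  # matches X-OpenStack-Nova-API-Version: 2.1

server = nova.servers.get('e86936dc-53ea-4101-81d9-99a20ee3b8fd')
print(server.name, server.status, getattr(server, 'OS-EXT-STS:vm_state'))
```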
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T15:59:41.020860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.057 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.091 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.149 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance e86936dc-53ea-4101-81d9-99a20ee3b8fd: ceilometer.compute.pollsters.NoVolumeException
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
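The memory.usage volumes (~48.8 MB of each 512 MB flavor) come from libvirt's per-domain balloon statistics. The freshly launched fvt_testing_server reports no balloon stats yet, which surfaces as "volume: Unavailable" and the NoVolumeException warning above. A sketch that mirrors the inspector's arithmetic, not its exact code:

```python
# Sketch of the memory.usage derivation from libvirt balloon stats (KiB).
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
stats = dom.memoryStats()  # keys such as 'available', 'unused', 'rss'
if 'available' in stats and 'unused' in stats:
    usage_mb = (stats['available'] - stats['unused']) / 1024.0
    print('memory.usage (MB):', usage_mb)
else:
    # A guest that has only just booted may not expose balloon stats yet,
    # which corresponds to the NoVolumeException path in the log above.
    print('memory stats unavailable yet')
```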
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T15:59:41.150753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.190 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.191 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.191 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.225 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.226 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.267 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.268 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.268 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
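Each instance contributes three disk.device.allocation samples, consistent with three block devices per guest (a root disk, an ephemeral disk, and a config drive; e86936dc... reports "config_drive": "True" above). Allocation, capacity, and usage all come from a single libvirt block-info call per device. A sketch assuming standard domain XML; the device iteration is illustrative:

```python
# Sketch of the per-device numbers behind disk.device.allocation,
# disk.device.capacity, and disk.device.usage: virDomainGetBlockInfo
# returns (capacity, allocation, physical) per disk target.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('fd83ae88-f3e1-49ef-8167-b8451d014cf7')
root = ET.fromstring(dom.XMLDesc())
for target in root.findall('./devices/disk/target'):
    dev = target.get('dev')  # e.g. 'vda'
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, 'capacity:', capacity, 'allocation:', allocation,
          'usage (physical):', physical)
```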
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T15:59:41.269706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.275 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.280 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
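Each poll is bracketed by a heartbeat handoff: the polling worker (pid 14 in these lines) stamps the pollster name, and a separate status writer (pid 12) logs "Updated heartbeat for ..." a few milliseconds later, which is why those pid-12 lines interleave slightly out of order with the samples. A generic producer/consumer illustration of that pattern, not ceilometer's exact implementation:

```python
# Illustrative heartbeat handoff between a polling worker and a status writer.
import datetime
import queue
import threading
import time

heartbeats = queue.Queue()

def heartbeat(pollster_name):
    # Producer side: the worker stamps the pollster just before it runs.
    heartbeats.put((pollster_name, datetime.datetime.now(datetime.timezone.utc)))

def status_writer():
    # Consumer side: persists "Updated heartbeat for <name> (<timestamp>)".
    while True:
        name, ts = heartbeats.get()
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

threading.Thread(target=status_writer, daemon=True).start()
heartbeat('memory.usage')
time.sleep(0.1)  # give the daemon writer a moment before the sketch exits
```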
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.285 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.286 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.287 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.287 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.288 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
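The network.* meters are read from per-vNIC interface statistics; only two instances appear in these samples because fvt_testing_server has "addresses": {} in the Nova response above and exposes no vNIC. The .delta variant subtracts the previous cycle's reading, so the agent keeps per-device state between polls; the cache below is illustrative, as is the device name:

```python
# Sketch of the network.* counters: virDomainInterfaceStats returns an
# 8-tuple per vNIC, and delta meters diff against the previous poll.
import libvirt

_prev_rx = {}  # illustrative per-device cache from the previous cycle

def poll_nic(dom, dev):
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(dev)
    delta = rx_bytes - _prev_rx.get(dev, rx_bytes)  # 0 on the first poll
    _prev_rx[dev] = rx_bytes
    print('network.incoming.bytes:', rx_bytes,
          'network.incoming.bytes.delta:', delta,
          'network.incoming.packets:', rx_packets,
          'network.incoming.packets.drop:', rx_drop,
          'network.outgoing.packets:', tx_packets,
          'network.outgoing.packets.error:', tx_errs)

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
poll_nic(dom, 'tap0')  # device name is an assumption; it comes from the domain XML
```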
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.288 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.289 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.289 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.289 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.290 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
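The power.state volume of 1 for all three instances maps to libvirt's VIR_DOMAIN_RUNNING, consistent with "OS-EXT-STS:power_state": 1 in the Nova response earlier. A short sketch:

```python
# Sketch of power.state: virDomainGetState returns (state, reason),
# where state 1 is VIR_DOMAIN_RUNNING.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('e86936dc-53ea-4101-81d9-99a20ee3b8fd')
state, reason = dom.state()
print('power.state:', state, 'running:', state == libvirt.VIR_DOMAIN_RUNNING)
```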
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.291 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.291 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T15:59:41.285653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T15:59:41.287573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T15:59:41.289269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T15:59:41.291386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.292 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.293 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.293 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.293 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.293 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.294 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.295 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T15:59:41.295225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.406 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.407 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.408 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.504 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.505 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.505 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.607 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.607 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.608 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
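disk.device.read.bytes (and read.requests below) are cumulative counters read straight from libvirt's basic block statistics. A sketch; the device name 'vda' is an assumption, taken in practice from the domain XML as in the earlier block-info sketch:

```python
# Sketch of disk.device.read.bytes / disk.device.read.requests:
# virDomainBlockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
print('disk.device.read.requests:', rd_req,
      'disk.device.read.bytes:', rd_bytes)
```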
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.610 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.611 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.612 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.613 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 2224753847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.613 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 114510394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.614 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 94768043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.615 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.615 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.616 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T15:59:41.610527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T15:59:41.613209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.616 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.latency volume: 616338695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.617 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.617 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.latency volume: 3393933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
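The read-latency volumes are cumulative I/O time in nanoseconds (the ~2224753847 sample above is roughly 2.2 s of accumulated read time on that device). They come from libvirt's extended block statistics, which return a dict of counters; a sketch, again with 'vda' as an assumed device name:

```python
# Sketch of disk.device.read.latency from virDomainBlockStatsFlags,
# whose dict exposes cumulative times such as 'rd_total_times' (ns).
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
stats = dom.blockStatsFlags('vda')
print('disk.device.read.latency (ns):', stats.get('rd_total_times'))
```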
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.619 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.620 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/cpu volume: 38120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T15:59:41.619963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.621 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 43000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.621 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/cpu volume: 8540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
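The cpu meter is cumulative guest CPU time in nanoseconds, so the 38120000000 sample above is about 38.12 s consumed since the instance started. It is available from the basic domain-info call:

```python
# Sketch of the cpu meter: virDomainGetInfo returns
# [state, maxMem, memory, nrVirtCpu, cpuTime], with cpuTime in ns.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354')
state, max_mem, mem, ncpu, cpu_time_ns = dom.info()
print('cpu (ns):', cpu_time_ns, 'vcpus:', ncpu)
```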
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T15:59:41.622897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.623 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.624 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.624 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.624 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.625 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.625 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.626 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.626 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.627 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.628 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.629 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.630 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T15:59:41.629344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T15:59:41.632493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.632 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.633 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.635 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T15:59:41.634990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.635 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.635 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.636 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.636 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.636 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.636 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.636 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.637 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
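Each instance above yields three disk.device.usage samples, one per block device: _stats_to_sample flattens the per-device stats returned by the libvirt inspector into individual samples. A sketch of that reshaping with assumed field names (ceilometer's real Sample type carries far more metadata, and the device names are hypothetical):

    # Illustrative version of the per-device fan-out behind the
    # "<uuid>/disk.device.usage volume: N" lines above.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        resource_id: str  # per-device samples key on instance+device
        meter: str
        volume: int

    def stats_to_samples(instance_uuid, meter, per_device_stats):
        """One sample per block device (e.g. vda, vdb, vdc)."""
        return [Sample(f"{instance_uuid}-{device}", meter, volume)
                for device, volume in per_device_stats.items()]

    # stats_to_samples("2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354",
    #                  "disk.device.usage",
    #                  {"vda": 21299200, "vdb": 393216, "vdc": 583680})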
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 2487471190 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 10083548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.638 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.639 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.639 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.639 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.640 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.640 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.640 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T15:59:41.638211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.642 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T15:59:41.641907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.642 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.642 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.642 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.643 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.643 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.643 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.643 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.644 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T15:59:41.645261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
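The rate meter fails permanently here: the libvirt inspector does not provide *.rate statistics, the pollster raises PollsterPermanentError (the real exception class, per the log's ceilometer.polling.plugin_base.PollsterPermanentError), and the manager blacklists the failed resources so this pollster never retries them on this source. A simplified sketch of that path; the class body and attribute names below are illustrative stand-ins, not ceilometer's internals:

    class PollsterPermanentError(Exception):
        """Raised by a pollster for resources it can never serve."""
        def __init__(self, resources):
            self.failed_resources = resources

    def poll_once(pollster, resources, blacklist):
        candidates = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(candidates))
        except PollsterPermanentError as err:
            # Matches the log: "Prevent pollster ... from polling
            # [<NovaLikeServer: fvt_testing_server>] on source
            # pollsters anymore!"
            blacklist.extend(err.failed_resources)
            return []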
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T15:59:41.646339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.646 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.647 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.647 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.647 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.647 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.648 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.648 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.648 14 DEBUG ceilometer.compute.pollsters [-] e86936dc-53ea-4101-81d9-99a20ee3b8fd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.650 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.650 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T15:59:41.649956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T15:59:41.651514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.654 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T15:59:41.652892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T15:59:41.654418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.655 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T15:59:41.655700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.656 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T15:59:41.656697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.657 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.658 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.658 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T15:59:41.658268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.659 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 15:59:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 15:59:41.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
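The batch of "Finished processing pollster" lines closes this polling task: every meter in the task is a stevedore plugin (hence the <stevedore.extension.Extension object ...> reprs earlier), and the manager simply iterates the loaded extensions. The same plugins can be enumerated directly with stevedore, assuming the standard ceilometer entry-point group 'ceilometer.poll.compute'; invoke_on_load is disabled so nothing is constructed:

    from stevedore import extension

    # List the compute pollsters installed on this host.
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',
        invoke_on_load=False,
    )
    for ext in sorted(mgr, key=lambda e: e.name):
        # ext.name is the meter, e.g. "disk.device.usage";
        # ext.plugin is the pollster class.
        print(ext.name, ext.plugin)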
Feb 24 15:59:43 compute-0 nova_compute[188703]: 2026-02-24 15:59:43.174 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:43 compute-0 nova_compute[188703]: 2026-02-24 15:59:43.914 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
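The recurring "[POLLIN] on fd 26" lines are the OVS IDL's event loop waking up whenever the OVSDB server socket becomes readable; ovs/poller.py logs each wakeup at DEBUG. The shape of that loop, reduced to the standard library (socket setup elided; this is only the wait/read skeleton, not ovsdbapp's API):

    import select
    import socket

    def pump_ovsdb(sock: socket.socket) -> None:
        """Block on the socket and drain it whenever POLLIN fires."""
        poller = select.poll()
        poller.register(sock.fileno(), select.POLLIN)
        while True:
            for fd, events in poller.poll():   # blocks until readable
                if events & select.POLLIN:
                    data = sock.recv(4096)     # OVSDB JSON-RPC bytes
                    if not data:
                        return                 # server closed the connection
                    # hand `data` to the JSON-RPC/IDL parser here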
Feb 24 15:59:44 compute-0 podman[248109]: 2026-02-24 15:59:44.818385251 +0000 UTC m=+0.131757250 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 15:59:44 compute-0 podman[248108]: 2026-02-24 15:59:44.820698154 +0000 UTC m=+0.134358321 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
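Both health_status lines come from podman executing each container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data) inside the container on a timer; exit status 0 is reported as healthy and resets the failing streak. A toy runner with the same exit-code contract (subprocess here stands in for the in-container exec; the path is the one from the config above):

    import subprocess

    def run_healthcheck(cmd):
        """Return a podman-style status from a healthcheck command."""
        result = subprocess.run(cmd, capture_output=True)
        return "healthy" if result.returncode == 0 else "unhealthy"

    # e.g. run_healthcheck(["/openstack/healthcheck"]) inside the container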
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.224 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.226 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.227 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.227 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.228 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
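The Acquiring/acquired/released trio, with its "waited" and "held" timings, is oslo.concurrency's lock instrumentation: nova serializes work per instance UUID, and the inner wrapper logs each transition. Equivalent application-side usage (lockutils.synchronized and lockutils.lock are the real oslo.concurrency APIs; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('e86936dc-53ea-4101-81d9-99a20ee3b8fd')
    def do_terminate_instance():
        # Work here runs under the per-instance lock; concurrent
        # callers block, and wait/hold durations are logged as above.
        pass

    # The same primitive as a context manager:
    with lockutils.lock('e86936dc-53ea-4101-81d9-99a20ee3b8fd-events'):
        pass  # critical section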
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.232 188707 INFO nova.compute.manager [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Terminating instance
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.234 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "refresh_cache-e86936dc-53ea-4101-81d9-99a20ee3b8fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.235 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquired lock "refresh_cache-e86936dc-53ea-4101-81d9-99a20ee3b8fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 15:59:46 compute-0 nova_compute[188703]: 2026-02-24 15:59:46.236 188707 DEBUG nova.network.neutron [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 15:59:47 compute-0 nova_compute[188703]: 2026-02-24 15:59:47.203 188707 DEBUG nova.network.neutron [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.030 188707 DEBUG nova.network.neutron [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.052 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Releasing lock "refresh_cache-e86936dc-53ea-4101-81d9-99a20ee3b8fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.053 188707 DEBUG nova.compute.manager [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 15:59:48 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 24 15:59:48 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.864s CPU time.
Feb 24 15:59:48 compute-0 systemd-machined[158049]: Machine qemu-5-instance-00000005 terminated.
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.177 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.318 188707 INFO nova.virt.libvirt.driver [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Instance destroyed successfully.
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.318 188707 DEBUG nova.objects.instance [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'resources' on Instance uuid e86936dc-53ea-4101-81d9-99a20ee3b8fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.335 188707 INFO nova.virt.libvirt.driver [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Deleting instance files /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd_del
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.337 188707 INFO nova.virt.libvirt.driver [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Deletion of /var/lib/nova/instances/e86936dc-53ea-4101-81d9-99a20ee3b8fd_del complete
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.391 188707 INFO nova.compute.manager [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Took 0.34 seconds to destroy the instance on the hypervisor.
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.392 188707 DEBUG oslo.service.loopingcall [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.393 188707 DEBUG nova.compute.manager [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.394 188707 DEBUG nova.network.neutron [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.519 188707 DEBUG nova.network.neutron [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.540 188707 DEBUG nova.network.neutron [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.554 188707 INFO nova.compute.manager [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Took 0.16 seconds to deallocate network for instance.
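The _deallocate_network_with_retries wrapper that the log waits on at 15:59:48.392 is driven by oslo.service's looping-call machinery. A minimal sketch of that pattern, assuming a hypothetical deallocate() helper (nova's real wrapper layers retry bookkeeping on top of this):

    from oslo_service import loopingcall

    def _deallocate():
        deallocate()                         # hypothetical network teardown
        raise loopingcall.LoopingCallDone()  # stop looping once it succeeds

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
    timer.start(interval=1).wait()  # "Waiting for function ... to return"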
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.613 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.614 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.710 188707 DEBUG nova.compute.provider_tree [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.729 188707 DEBUG nova.scheduler.client.report [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.755 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.795 188707 INFO nova.scheduler.client.report [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Deleted allocations for instance e86936dc-53ea-4101-81d9-99a20ee3b8fd
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.863 188707 DEBUG oslo_concurrency.lockutils [None req-3035ce9a-7462-45d3-873e-08ebffb87ca3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "e86936dc-53ea-4101-81d9-99a20ee3b8fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
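Placement treats each resource class's schedulable capacity as (total - reserved) * allocation_ratio. Applying that to the inventory logged at 15:59:48.729 gives the headroom the scheduler actually works with:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2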
Feb 24 15:59:48 compute-0 nova_compute[188703]: 2026-02-24 15:59:48.917 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:51 compute-0 podman[248166]: 2026-02-24 15:59:51.129925382 +0000 UTC m=+0.084991037 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
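Each "container health_status" line is podman running the command from the container's healthcheck stanza ('test': '/openstack/healthcheck podman_exporter' in the config_data above). The same status and failing streak can be read back from the container's state; a sketch using podman inspect:

    import json
    import subprocess

    insp = subprocess.run(["podman", "inspect", "podman_exporter"],
                          capture_output=True, text=True, check=True)
    health = json.loads(insp.stdout)[0]["State"].get("Health", {})
    print(health.get("Status"), health.get("FailingStreak"))  # e.g. healthy 0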
Feb 24 15:59:53 compute-0 nova_compute[188703]: 2026-02-24 15:59:53.179 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:53 compute-0 nova_compute[188703]: 2026-02-24 15:59:53.921 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:53 compute-0 nova_compute[188703]: 2026-02-24 15:59:53.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:59:55.722 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 15:59:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:59:55.723 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 15:59:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 15:59:55.724 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 15:59:57 compute-0 nova_compute[188703]: 2026-02-24 15:59:57.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:58 compute-0 nova_compute[188703]: 2026-02-24 15:59:58.183 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:58 compute-0 nova_compute[188703]: 2026-02-24 15:59:58.925 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 15:59:59 compute-0 podman[204685]: time="2026-02-24T15:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 15:59:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 15:59:59 compute-0 podman[204685]: @ - - [24/Feb/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 24 15:59:59 compute-0 nova_compute[188703]: 2026-02-24 15:59:59.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 15:59:59 compute-0 nova_compute[188703]: 2026-02-24 15:59:59.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
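_heal_instance_info_cache, _poll_volume_usage and the other "Running periodic task" entries are all registered through oslo.service's periodic-task decorator. A minimal sketch of how such a task is declared (the spacing value is illustrative, not nova's configured interval):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # run roughly once a minute
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network-info cache per pass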
Feb 24 16:00:01 compute-0 nova_compute[188703]: 2026-02-24 16:00:01.038 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:00:01 compute-0 nova_compute[188703]: 2026-02-24 16:00:01.039 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:00:01 compute-0 nova_compute[188703]: 2026-02-24 16:00:01.039 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:00:01 compute-0 podman[248192]: 2026-02-24 16:00:01.122287797 +0000 UTC m=+0.071373522 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 16:00:01 compute-0 podman[248191]: 2026-02-24 16:00:01.122821283 +0000 UTC m=+0.080790271 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:00:01 compute-0 openstack_network_exporter[207830]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:00:01 compute-0 openstack_network_exporter[207830]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
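These two appctl errors repeat every polling interval. dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to userspace (netdev/DPDK) datapaths, so on a host using the kernel OVS datapath the "please specify an existing datapath" reply is most likely expected noise rather than a fault. The exporter's call can be reproduced by hand; a sketch via subprocess:

    import subprocess

    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                       capture_output=True, text=True)
    print(r.returncode, r.stderr.strip())
    # With no netdev datapath configured this fails with
    # "please specify an existing datapath", matching the exporter log.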
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.184 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.304 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
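The cache payload above is a JSON list of VIFs; for this instance the fixed address 192.168.0.42 carries floating IP 192.168.122.172. A short sketch for pulling the addresses out of such a blob, assuming it has been captured into a string network_info_json:

    import json

    vifs = json.loads(network_info_json)  # the list logged above
    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)
    # 34a110b8-bd03-4b38-8f53-7380a2e1fc82 192.168.0.42 ['192.168.122.172']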
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.312 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771948788.3111787, e86936dc-53ea-4101-81d9-99a20ee3b8fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.312 188707 INFO nova.compute.manager [-] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] VM Stopped (Lifecycle Event)
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.323 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.324 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.325 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.325 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.326 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.327 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.332 188707 DEBUG nova.compute.manager [None req-f7e5d3df-eac3-4d71-8a1e-1d69c7b046a8 - - - - - -] [instance: e86936dc-53ea-4101-81d9-99a20ee3b8fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.927 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:03 compute-0 nova_compute[188703]: 2026-02-24 16:00:03.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:04 compute-0 podman[248234]: 2026-02-24 16:00:04.178574113 +0000 UTC m=+0.122433465 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:00:04 compute-0 podman[248233]: 2026-02-24 16:00:04.183966691 +0000 UTC m=+0.128748748 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Feb 24 16:00:05 compute-0 nova_compute[188703]: 2026-02-24 16:00:05.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:07 compute-0 nova_compute[188703]: 2026-02-24 16:00:07.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:07 compute-0 nova_compute[188703]: 2026-02-24 16:00:07.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:00:07 compute-0 nova_compute[188703]: 2026-02-24 16:00:07.974 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:00:07 compute-0 nova_compute[188703]: 2026-02-24 16:00:07.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:00:07 compute-0 nova_compute[188703]: 2026-02-24 16:00:07.975 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.100 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.185 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.188 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.206 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.273 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.275 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.362 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.365 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.445 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.459 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.532 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.534 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.595 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.597 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.677 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.679 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.757 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
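Each disk-audit pair above is oslo.concurrency running qemu-img under prlimit: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU time at 30 s, so a wedged qemu-img cannot stall the resource audit. A sketch of the equivalent call through processutils, with the instance path taken from the log:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30),
    )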
Feb 24 16:00:08 compute-0 nova_compute[188703]: 2026-02-24 16:00:08.930 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.225 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.228 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4775MB free_disk=72.18993377685547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.228 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.229 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.327 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.328 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.329 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.329 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
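The final view is consistent with the two allocations logged just above it: used_ram 1536 MB = 512 MB reserved + 2 instances x 512 MB, used_disk 4 GB = 2 x 2 GB, and used_vcpus 2 = 2 x 1 VCPU, which leaves free_vcpus 6 of the 8 physical CPUs reported in the hypervisor view.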
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.411 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.431 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.475 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:00:09 compute-0 nova_compute[188703]: 2026-02-24 16:00:09.476 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:00:10 compute-0 podman[248297]: 2026-02-24 16:00:10.140433033 +0000 UTC m=+0.088392200 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, release=1770267347, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 24 16:00:11 compute-0 sshd-session[247630]: Received disconnect from 38.102.83.66 port 49274:11: disconnected by user
Feb 24 16:00:11 compute-0 sshd-session[247630]: Disconnected from user zuul 38.102.83.66 port 49274
Feb 24 16:00:11 compute-0 sshd-session[247627]: pam_unix(sshd:session): session closed for user zuul
Feb 24 16:00:11 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Feb 24 16:00:11 compute-0 systemd-logind[813]: Session 29 logged out. Waiting for processes to exit.
Feb 24 16:00:11 compute-0 systemd-logind[813]: Removed session 29.
Feb 24 16:00:13 compute-0 nova_compute[188703]: 2026-02-24 16:00:13.188 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:13 compute-0 nova_compute[188703]: 2026-02-24 16:00:13.932 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:15 compute-0 podman[248317]: 2026-02-24 16:00:15.197553419 +0000 UTC m=+0.143190485 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:00:15 compute-0 podman[248316]: 2026-02-24 16:00:15.204035267 +0000 UTC m=+0.148378187 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 16:00:18 compute-0 nova_compute[188703]: 2026-02-24 16:00:18.192 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:18 compute-0 nova_compute[188703]: 2026-02-24 16:00:18.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:22 compute-0 podman[248358]: 2026-02-24 16:00:22.17422556 +0000 UTC m=+0.120706727 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:00:22 compute-0 sshd-session[248382]: Accepted publickey for zuul from 38.102.83.66 port 59050 ssh2: RSA SHA256:NJTfdsSIVB6mH9/ClrbKw1e6GvsHFWYkptASszhoj5w
Feb 24 16:00:22 compute-0 systemd-logind[813]: New session 30 of user zuul.
Feb 24 16:00:22 compute-0 systemd[1]: Started Session 30 of User zuul.
Feb 24 16:00:22 compute-0 sshd-session[248382]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 16:00:23 compute-0 nova_compute[188703]: 2026-02-24 16:00:23.194 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:23 compute-0 sudo[248559]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpnmbptimwqivjnimyqxilpjwvkvqajg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771948823.031015-59856-17213908326445/AnsiballZ_command.py'
Feb 24 16:00:23 compute-0 sudo[248559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:00:23 compute-0 python3[248562]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 16:00:23 compute-0 sudo[248559]: pam_unix(sudo:session): session closed for user root
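The Ansible task above runs with _uses_shell=True, so the pipe to grep works as written. The same check can be done programmatically without a shell; a sketch, with the container name taken from the task:

    import subprocess

    ps = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    )
    node_exporter = [line for line in ps.stdout.splitlines()
                     if "node_exporter" in line]
    print(node_exporter)  # e.g. ['node_exporter Up 2 hours (healthy)']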
Feb 24 16:00:23 compute-0 nova_compute[188703]: 2026-02-24 16:00:23.939 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:28 compute-0 nova_compute[188703]: 2026-02-24 16:00:28.196 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:28 compute-0 nova_compute[188703]: 2026-02-24 16:00:28.943 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:29 compute-0 podman[204685]: time="2026-02-24T16:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:00:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:00:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Feb 24 16:00:31 compute-0 openstack_network_exporter[207830]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:00:31 compute-0 openstack_network_exporter[207830]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:00:31 compute-0 sudo[248801]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thoxneaoyhibanczdbmecbscjvxyepzd ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771948831.2859943-60022-257147867624612/AnsiballZ_command.py'
Feb 24 16:00:32 compute-0 sudo[248801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:00:32 compute-0 podman[248750]: 2026-02-24 16:00:32.014384691 +0000 UTC m=+0.090077546 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:00:32 compute-0 podman[248751]: 2026-02-24 16:00:32.033345332 +0000 UTC m=+0.110400574 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:00:32 compute-0 python3[248819]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 16:00:32 compute-0 sudo[248801]: pam_unix(sudo:session): session closed for user root
Feb 24 16:00:33 compute-0 nova_compute[188703]: 2026-02-24 16:00:33.200 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:33 compute-0 nova_compute[188703]: 2026-02-24 16:00:33.946 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:35 compute-0 podman[248861]: 2026-02-24 16:00:35.155879767 +0000 UTC m=+0.109410406 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:00:35 compute-0 podman[248860]: 2026-02-24 16:00:35.186041237 +0000 UTC m=+0.139444283 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Feb 24 16:00:38 compute-0 nova_compute[188703]: 2026-02-24 16:00:38.204 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:38 compute-0 nova_compute[188703]: 2026-02-24 16:00:38.949 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:41 compute-0 podman[248897]: 2026-02-24 16:00:41.171251751 +0000 UTC m=+0.122990331 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:00:42 compute-0 sudo[249092]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvkjjdwfyhxpxhqznvwasyiaarqokok ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771948841.786499-60182-179500777324818/AnsiballZ_command.py'
Feb 24 16:00:42 compute-0 sudo[249092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:00:42 compute-0 python3[249095]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 16:00:42 compute-0 sudo[249092]: pam_unix(sudo:session): session closed for user root
Feb 24 16:00:43 compute-0 nova_compute[188703]: 2026-02-24 16:00:43.206 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:43 compute-0 nova_compute[188703]: 2026-02-24 16:00:43.951 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:46 compute-0 podman[249134]: 2026-02-24 16:00:46.165860979 +0000 UTC m=+0.113372946 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:00:46 compute-0 podman[249135]: 2026-02-24 16:00:46.219662377 +0000 UTC m=+0.170143305 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:00:48 compute-0 nova_compute[188703]: 2026-02-24 16:00:48.210 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:48 compute-0 nova_compute[188703]: 2026-02-24 16:00:48.954 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:53 compute-0 podman[249182]: 2026-02-24 16:00:53.161601036 +0000 UTC m=+0.100007948 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:00:53 compute-0 nova_compute[188703]: 2026-02-24 16:00:53.213 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:53 compute-0 nova_compute[188703]: 2026-02-24 16:00:53.956 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:00:55.724 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:00:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:00:55.725 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:00:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:00:55.726 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:00:56 compute-0 nova_compute[188703]: 2026-02-24 16:00:56.476 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:57 compute-0 nova_compute[188703]: 2026-02-24 16:00:57.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:00:58 compute-0 nova_compute[188703]: 2026-02-24 16:00:58.217 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:58 compute-0 nova_compute[188703]: 2026-02-24 16:00:58.958 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:00:59 compute-0 sudo[249379]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqnevqqtefhctjuaqaapfkatwlaamzfu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771948858.5151193-60404-172759820119982/AnsiballZ_command.py'
Feb 24 16:00:59 compute-0 sudo[249379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:00:59 compute-0 python3[249382]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 24 16:00:59 compute-0 sudo[249379]: pam_unix(sudo:session): session closed for user root
Feb 24 16:00:59 compute-0 podman[204685]: time="2026-02-24T16:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:00:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:00:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 24 16:01:00 compute-0 nova_compute[188703]: 2026-02-24 16:01:00.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:00 compute-0 nova_compute[188703]: 2026-02-24 16:01:00.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:01:00 compute-0 nova_compute[188703]: 2026-02-24 16:01:00.946 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:01:01 compute-0 CROND[249422]: (root) CMD (run-parts /etc/cron.hourly)
Feb 24 16:01:01 compute-0 run-parts[249425]: (/etc/cron.hourly) starting 0anacron
Feb 24 16:01:01 compute-0 run-parts[249431]: (/etc/cron.hourly) finished 0anacron
Feb 24 16:01:01 compute-0 CROND[249421]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 24 16:01:01 compute-0 openstack_network_exporter[207830]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:01:01 compute-0 openstack_network_exporter[207830]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:01:02 compute-0 nova_compute[188703]: 2026-02-24 16:01:02.186 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:01:02 compute-0 nova_compute[188703]: 2026-02-24 16:01:02.187 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:01:02 compute-0 nova_compute[188703]: 2026-02-24 16:01:02.188 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:01:02 compute-0 nova_compute[188703]: 2026-02-24 16:01:02.189 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:01:03 compute-0 podman[249433]: 2026-02-24 16:01:03.14435916 +0000 UTC m=+0.103722321 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 24 16:01:03 compute-0 podman[249432]: 2026-02-24 16:01:03.154373144 +0000 UTC m=+0.114878796 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:01:03 compute-0 nova_compute[188703]: 2026-02-24 16:01:03.220 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:03 compute-0 nova_compute[188703]: 2026-02-24 16:01:03.961 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.372 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.394 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.394 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.395 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.396 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.396 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.397 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:05 compute-0 nova_compute[188703]: 2026-02-24 16:01:05.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:06 compute-0 podman[249475]: 2026-02-24 16:01:06.152527924 +0000 UTC m=+0.101290504 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, config_id=kepler, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:01:06 compute-0 podman[249476]: 2026-02-24 16:01:06.181683284 +0000 UTC m=+0.125742074 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:01:06 compute-0 nova_compute[188703]: 2026-02-24 16:01:06.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.222 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.965 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.978 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:01:08 compute-0 nova_compute[188703]: 2026-02-24 16:01:08.979 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.092 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.189 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.190 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.276 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.278 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.345 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.347 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.426 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.437 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.491 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.492 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.566 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.567 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.649 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.650 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:01:09 compute-0 nova_compute[188703]: 2026-02-24 16:01:09.700 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.068 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.069 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4790MB free_disk=72.18942260742188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.069 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.069 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.152 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.153 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.153 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.153 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.174 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.195 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.196 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
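Placement derives the schedulable capacity of each resource class from this inventory as (total - reserved) * allocation_ratio, so the values logged above give 32 schedulable VCPUs, 7167 MB of RAM, and 70.2 GB of disk. A quick arithmetic sketch using the logged inventory:

    # Sketch: effective placement capacity = (total - reserved) * allocation_ratio,
    # using the inventory logged above for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2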
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.210 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.249 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.331 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.367 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.370 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:01:10 compute-0 nova_compute[188703]: 2026-02-24 16:01:10.371 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
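The acquire/release pair above is oslo.concurrency's lockutils at work: the resource tracker serializes every mutation of its resource view behind the "compute_resources" lock (held here for 0.302 s). A minimal sketch of the same pattern, in both decorator and context-manager form:

    # Sketch of the lockutils pattern behind the "compute_resources"
    # acquire/release lines above; the function body is a placeholder.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # critical section: reconcile the hypervisor view with placement
        pass

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass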
Feb 24 16:01:12 compute-0 podman[249541]: 2026-02-24 16:01:12.122833882 +0000 UTC m=+0.075802953 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_id=openstack_network_exporter)
Feb 24 16:01:13 compute-0 nova_compute[188703]: 2026-02-24 16:01:13.225 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:13 compute-0 nova_compute[188703]: 2026-02-24 16:01:13.966 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:17 compute-0 podman[249562]: 2026-02-24 16:01:17.160531566 +0000 UTC m=+0.117816577 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute)
Feb 24 16:01:17 compute-0 podman[249563]: 2026-02-24 16:01:17.211403154 +0000 UTC m=+0.164858581 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:01:18 compute-0 nova_compute[188703]: 2026-02-24 16:01:18.228 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:18 compute-0 nova_compute[188703]: 2026-02-24 16:01:18.969 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:23 compute-0 nova_compute[188703]: 2026-02-24 16:01:23.231 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:23 compute-0 podman[249607]: 2026-02-24 16:01:23.385513655 +0000 UTC m=+0.127662749 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:01:23 compute-0 nova_compute[188703]: 2026-02-24 16:01:23.973 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:28 compute-0 nova_compute[188703]: 2026-02-24 16:01:28.235 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:28 compute-0 nova_compute[188703]: 2026-02-24 16:01:28.975 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:29 compute-0 podman[204685]: time="2026-02-24T16:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:01:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:01:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
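These two GET requests are the podman exporter scraping the libpod REST API over the podman socket. The same container listing can be reproduced with the podman-py client; a sketch, assuming podman-py is installed and the socket lives at /run/podman/podman.sock as configured for the exporter above:

    # Sketch: reproduce the libpod "containers/json?all=true" call above
    # with podman-py (socket path taken from the exporter's config above).
    from podman import PodmanClient

    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)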
Feb 24 16:01:31 compute-0 openstack_network_exporter[207830]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:01:31 compute-0 openstack_network_exporter[207830]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
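The two appctl errors above are the exporter asking Open vSwitch for PMD statistics; the dpif-netdev/* targets only exist when OVS runs a userspace (netdev/DPDK) datapath, so on a kernel-datapath host like this one the calls fail with "please specify an existing datapath". A sketch that reproduces the same calls via subprocess (harmless to run; on a kernel-datapath host it just prints the same error):

    # Sketch: issue the same ovs-appctl commands the exporter calls; on a
    # host without a netdev datapath they return the error logged above.
    import subprocess

    for cmd in ('dpif-netdev/pmd-rxq-show', 'dpif-netdev/pmd-perf-show'):
        r = subprocess.run(['ovs-appctl', cmd], capture_output=True, text=True)
        print(cmd, '->', r.returncode, (r.stderr or r.stdout).strip())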
Feb 24 16:01:33 compute-0 nova_compute[188703]: 2026-02-24 16:01:33.241 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:33 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 16:01:33 compute-0 podman[249633]: 2026-02-24 16:01:33.618902296 +0000 UTC m=+0.129493015 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:01:33 compute-0 podman[249634]: 2026-02-24 16:01:33.621839768 +0000 UTC m=+0.121640368 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:01:33 compute-0 nova_compute[188703]: 2026-02-24 16:01:33.978 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:37 compute-0 podman[249672]: 2026-02-24 16:01:37.167864414 +0000 UTC m=+0.116691862 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible)
Feb 24 16:01:37 compute-0 podman[249673]: 2026-02-24 16:01:37.202217173 +0000 UTC m=+0.147869893 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:01:38 compute-0 nova_compute[188703]: 2026-02-24 16:01:38.242 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:38 compute-0 nova_compute[188703]: 2026-02-24 16:01:38.983 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.833 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.834 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea8c4d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.847 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'name': 'vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {'metering.server_group': '105127c2-20fd-4471-8609-2ac19fea2fd2'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.853 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'name': 'test_0', 'flavor': {'id': '521ca388-0b2e-40c6-bb06-118d4ed86b49', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de6b8fc8-e0dc-4bbf-943b-e6ac6027af11'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4407f5b870e145d8917119ad928717e8', 'user_id': 'bd338d866e3242aeb685fec99c451955', 'hostId': '781ecc37a6b79190806723d16d00ebb39101dcca0f232fcf28344e62', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.854 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:01:39.854696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.897 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.932 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
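The memory.usage samples above (48.89 MB and 48.76 MB for two 512 MB guests) come from libvirt's per-domain memory statistics; roughly, usage is the balloon's available memory minus what the guest reports unused, converted from KiB to MB. A sketch, assuming the libvirt-python bindings and a local qemu:///system connection (the instance name is taken from the discovery data above):

    # Sketch: derive a memory.usage-style figure from libvirt memoryStats()
    # (values are in KiB); "available - unused" is an approximation of
    # what the pollster reports.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    stats = dom.memoryStats()
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
        print('memory.usage:', usage_mb)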
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:01:39.934741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.972 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.973 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:39.973 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.007 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.008 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.008 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
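The three disk.device.allocation samples per instance above correspond to the "allocation" field (bytes actually written) of libvirt's block-device info for each attached disk. A sketch, assuming libvirt-python, a local connection, and illustrative device names:

    # Sketch: blockInfo() returns (capacity, allocation, physical) per device;
    # disk.device.allocation maps to the middle value. Device names here
    # are illustrative, not taken from the log.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    for dev in ('vda', 'vdb'):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'allocation bytes:', allocation)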
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:01:40.010844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.016 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.021 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.026 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:01:40.025746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.028 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.032 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.033 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:01:40.031522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.036 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.038 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.039 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.043 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.045 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.046 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:01:40.037659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:01:40.043325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.047 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.048 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.049 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:01:40.052185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.155 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.156 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.157 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.259 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.260 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.260 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.262 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.262 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.263 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 2224753847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.264 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 114510394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.264 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.latency volume: 94768043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.264 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 691853245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.265 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124156741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.265 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.latency volume: 124375245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.266 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:01:40.261950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:01:40.263692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:01:40.267410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.267 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/cpu volume: 39870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.268 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/cpu volume: 44760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:01:40.269431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.269 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.270 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.270 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.270 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.271 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.271 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.272 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.273 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:01:40.272406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:01:40.274407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.274 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.275 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.276 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.277 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.277 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.277 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.278 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:01:40.276549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.279 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.280 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 2487471190 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.280 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 10083548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.281 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.281 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 2170641399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.282 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 13738713 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.282 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.283 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.283 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.284 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.284 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.284 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.285 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.285 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:01:40.280025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.286 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:01:40.283873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.288 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.289 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.289 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.289 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.290 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.290 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.291 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.291 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.291 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:01:40.288402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.293 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.293 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.293 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:01:40.291378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.294 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.295 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:01:40.292984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.296 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:01:40.294664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:01:40.296266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.298 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.298 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:01:40.298438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.300 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.300 14 DEBUG ceilometer.compute.pollsters [-] 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.301 14 DEBUG ceilometer.compute.pollsters [-] fd83ae88-f3e1-49ef-8167-b8451d014cf7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:01:40.300596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:01:40.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:01:43 compute-0 podman[249711]: 2026-02-24 16:01:43.19196138 +0000 UTC m=+0.132712855 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:01:43 compute-0 nova_compute[188703]: 2026-02-24 16:01:43.244 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:43 compute-0 nova_compute[188703]: 2026-02-24 16:01:43.986 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:48 compute-0 podman[249732]: 2026-02-24 16:01:48.171987904 +0000 UTC m=+0.119716755 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 24 16:01:48 compute-0 podman[249733]: 2026-02-24 16:01:48.231407204 +0000 UTC m=+0.178131177 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:01:48 compute-0 nova_compute[188703]: 2026-02-24 16:01:48.247 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:48 compute-0 nova_compute[188703]: 2026-02-24 16:01:48.989 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:53 compute-0 nova_compute[188703]: 2026-02-24 16:01:53.250 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:53 compute-0 nova_compute[188703]: 2026-02-24 16:01:53.991 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:54 compute-0 podman[249780]: 2026-02-24 16:01:54.134441178 +0000 UTC m=+0.088789722 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:01:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:01:55.726 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:01:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:01:55.727 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:01:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:01:55.728 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:01:56 compute-0 nova_compute[188703]: 2026-02-24 16:01:56.372 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:01:58 compute-0 nova_compute[188703]: 2026-02-24 16:01:58.254 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:58 compute-0 nova_compute[188703]: 2026-02-24 16:01:58.993 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:01:59 compute-0 sshd-session[248385]: Received disconnect from 38.102.83.66 port 59050:11: disconnected by user
Feb 24 16:01:59 compute-0 sshd-session[248385]: Disconnected from user zuul 38.102.83.66 port 59050
Feb 24 16:01:59 compute-0 sshd-session[248382]: pam_unix(sshd:session): session closed for user zuul
Feb 24 16:01:59 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Feb 24 16:01:59 compute-0 systemd[1]: session-30.scope: Consumed 4.184s CPU time.
Feb 24 16:01:59 compute-0 systemd-logind[813]: Session 30 logged out. Waiting for processes to exit.
Feb 24 16:01:59 compute-0 systemd-logind[813]: Removed session 30.
Feb 24 16:01:59 compute-0 podman[204685]: time="2026-02-24T16:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:01:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:01:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Feb 24 16:01:59 compute-0 nova_compute[188703]: 2026-02-24 16:01:59.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:01 compute-0 openstack_network_exporter[207830]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:02:01 compute-0 openstack_network_exporter[207830]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:02:01 compute-0 nova_compute[188703]: 2026-02-24 16:02:01.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:02 compute-0 nova_compute[188703]: 2026-02-24 16:02:02.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:02 compute-0 nova_compute[188703]: 2026-02-24 16:02:02.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:02:03 compute-0 nova_compute[188703]: 2026-02-24 16:02:03.256 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:03 compute-0 nova_compute[188703]: 2026-02-24 16:02:03.996 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:04 compute-0 podman[249803]: 2026-02-24 16:02:04.143338579 +0000 UTC m=+0.087585678 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:02:04 compute-0 podman[249804]: 2026-02-24 16:02:04.186369317 +0000 UTC m=+0.126097972 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:02:04 compute-0 nova_compute[188703]: 2026-02-24 16:02:04.195 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:02:04 compute-0 nova_compute[188703]: 2026-02-24 16:02:04.195 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:02:04 compute-0 nova_compute[188703]: 2026-02-24 16:02:04.195 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.203 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.225 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.225 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.226 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.227 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.228 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:06 compute-0 nova_compute[188703]: 2026-02-24 16:02:06.228 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:02:07 compute-0 nova_compute[188703]: 2026-02-24 16:02:07.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:07 compute-0 nova_compute[188703]: 2026-02-24 16:02:07.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:07 compute-0 nova_compute[188703]: 2026-02-24 16:02:07.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:02:07 compute-0 nova_compute[188703]: 2026-02-24 16:02:07.962 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:08 compute-0 podman[249846]: 2026-02-24 16:02:08.12855323 +0000 UTC m=+0.087701182 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 24 16:02:08 compute-0 podman[249845]: 2026-02-24 16:02:08.136257732 +0000 UTC m=+0.097122142 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, distribution-scope=public, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:02:08 compute-0 nova_compute[188703]: 2026-02-24 16:02:08.259 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:08 compute-0 nova_compute[188703]: 2026-02-24 16:02:08.998 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:09 compute-0 nova_compute[188703]: 2026-02-24 16:02:09.972 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.011 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.012 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.013 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.014 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.124 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.212 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.213 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.291 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.293 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.381 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.383 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.467 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.479 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.569 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.571 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.635 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.636 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.681 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.682 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:02:10 compute-0 nova_compute[188703]: 2026-02-24 16:02:10.732 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.206 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.208 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4808MB free_disk=72.18942260742188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.209 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.209 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.419 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.420 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.420 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.420 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.645 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.658 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
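[editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio. A quick check of what the logged numbers imply:

    # Effective capacity per resource class, using the inventory from the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2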
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.659 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.659 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
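[editor's note] The acquire/release pairs around this update come from oslo.concurrency's lockutils. A minimal sketch of the same serialization pattern (lock name taken from the log; the function body is illustrative only):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # recompute usage, then report inventory to placement
        pass

    _update_available_resource()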
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.659 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.660 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:02:11 compute-0 nova_compute[188703]: 2026-02-24 16:02:11.679 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:02:13 compute-0 nova_compute[188703]: 2026-02-24 16:02:13.262 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:14 compute-0 nova_compute[188703]: 2026-02-24 16:02:14.001 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:14 compute-0 podman[249906]: 2026-02-24 16:02:14.134327612 +0000 UTC m=+0.074179299 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., version=9.7, managed_by=edpm_ansible, release=1770267347, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git)
Feb 24 16:02:18 compute-0 nova_compute[188703]: 2026-02-24 16:02:18.265 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:19 compute-0 nova_compute[188703]: 2026-02-24 16:02:19.004 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:19 compute-0 podman[249927]: 2026-02-24 16:02:19.178666643 +0000 UTC m=+0.120445945 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:02:19 compute-0 podman[249928]: 2026-02-24 16:02:19.220989761 +0000 UTC m=+0.159173825 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
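[editor's note] Each health_status=healthy record is podman's healthcheck timer running the configured test inside the container. The same check can be triggered on demand; a sketch using a container name from the log (requires podman and suitable privileges):

    import subprocess

    # Exit code 0 means healthy, 1 unhealthy; mirrors the timer-driven check.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                       capture_output=True, text=True)
    print('healthy' if r.returncode == 0 else 'unhealthy: ' + r.stderr)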
Feb 24 16:02:23 compute-0 nova_compute[188703]: 2026-02-24 16:02:23.268 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:24 compute-0 nova_compute[188703]: 2026-02-24 16:02:24.008 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:25 compute-0 podman[249973]: 2026-02-24 16:02:25.172052578 +0000 UTC m=+0.127885840 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:02:28 compute-0 nova_compute[188703]: 2026-02-24 16:02:28.270 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:29 compute-0 nova_compute[188703]: 2026-02-24 16:02:29.011 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:29 compute-0 podman[204685]: time="2026-02-24T16:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:02:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:02:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
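[editor's note] These GET lines are the podman exporter scraping the libpod REST API over the service socket (mounted at /run/podman/podman.sock per the exporter config above). One way to reproduce the first query from Python, assuming the socket is readable:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')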
Feb 24 16:02:31 compute-0 openstack_network_exporter[207830]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:02:31 compute-0 openstack_network_exporter[207830]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
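[editor's note] pmd-perf-show and pmd-rxq-show exist only for the userspace (dpif-netdev, i.e. DPDK) datapath, so the "please specify an existing datapath" errors suggest this host runs the kernel datapath; the exporter's calls are noisy but harmless. A sketch that checks which datapaths exist before asking for PMD stats (assumes a local ovs-vswitchd and permission to run ovs-appctl):

    import subprocess

    dps = subprocess.run(['ovs-appctl', 'dpctl/dump-dps'],
                         capture_output=True, text=True, check=True).stdout.split()
    print('datapaths:', dps)          # e.g. ['system@ovs-system'] on a kernel-datapath host
    if any(dp.startswith('netdev@') for dp in dps):
        # Only meaningful with the userspace datapath
        print(subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                             capture_output=True, text=True).stdout)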
Feb 24 16:02:33 compute-0 nova_compute[188703]: 2026-02-24 16:02:33.273 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:34 compute-0 nova_compute[188703]: 2026-02-24 16:02:34.013 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:35 compute-0 podman[249996]: 2026-02-24 16:02:35.176402699 +0000 UTC m=+0.128780745 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:02:35 compute-0 podman[249997]: 2026-02-24 16:02:35.195866526 +0000 UTC m=+0.141261350 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 16:02:38 compute-0 nova_compute[188703]: 2026-02-24 16:02:38.275 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:39 compute-0 nova_compute[188703]: 2026-02-24 16:02:39.015 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:39 compute-0 podman[250038]: 2026-02-24 16:02:39.171061908 +0000 UTC m=+0.123192111 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler)
Feb 24 16:02:39 compute-0 podman[250039]: 2026-02-24 16:02:39.190303989 +0000 UTC m=+0.139807580 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 24 16:02:43 compute-0 nova_compute[188703]: 2026-02-24 16:02:43.278 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:44 compute-0 nova_compute[188703]: 2026-02-24 16:02:44.018 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:44 compute-0 podman[250074]: 2026-02-24 16:02:44.734386494 +0000 UTC m=+0.072461571 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, distribution-scope=public, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 16:02:48 compute-0 nova_compute[188703]: 2026-02-24 16:02:48.281 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:49 compute-0 nova_compute[188703]: 2026-02-24 16:02:49.021 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:50 compute-0 podman[250096]: 2026-02-24 16:02:50.134688923 +0000 UTC m=+0.084254396 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Feb 24 16:02:50 compute-0 podman[250097]: 2026-02-24 16:02:50.177188606 +0000 UTC m=+0.118694408 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223)
Feb 24 16:02:53 compute-0 nova_compute[188703]: 2026-02-24 16:02:53.283 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:54 compute-0 nova_compute[188703]: 2026-02-24 16:02:54.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:02:55.727 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:02:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:02:55.728 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:02:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:02:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:02:56 compute-0 podman[250138]: 2026-02-24 16:02:56.16970882 +0000 UTC m=+0.123162511 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:02:57 compute-0 nova_compute[188703]: 2026-02-24 16:02:57.648 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:02:58 compute-0 nova_compute[188703]: 2026-02-24 16:02:58.286 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:59 compute-0 nova_compute[188703]: 2026-02-24 16:02:59.025 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:02:59 compute-0 podman[204685]: time="2026-02-24T16:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:02:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:02:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Feb 24 16:03:00 compute-0 nova_compute[188703]: 2026-02-24 16:03:00.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:01 compute-0 openstack_network_exporter[207830]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:03:01 compute-0 openstack_network_exporter[207830]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.581 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.583 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.584 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.585 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.587 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.590 188707 INFO nova.compute.manager [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Terminating instance
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.594 188707 DEBUG nova.compute.manager [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:03:01 compute-0 kernel: tap34a110b8-bd (unregistering): left promiscuous mode
Feb 24 16:03:01 compute-0 NetworkManager[56995]: <info>  [1771948981.6575] device (tap34a110b8-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.671 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 ovn_controller[98701]: 2026-02-24T16:03:01Z|00058|binding|INFO|Releasing lport 34a110b8-bd03-4b38-8f53-7380a2e1fc82 from this chassis (sb_readonly=0)
Feb 24 16:03:01 compute-0 ovn_controller[98701]: 2026-02-24T16:03:01Z|00059|binding|INFO|Setting lport 34a110b8-bd03-4b38-8f53-7380a2e1fc82 down in Southbound
Feb 24 16:03:01 compute-0 ovn_controller[98701]: 2026-02-24T16:03:01Z|00060|binding|INFO|Removing iface tap34a110b8-bd ovn-installed in OVS
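[editor's note] On instance teardown ovn-controller releases the logical port and marks it down in the Southbound database. The resulting binding can be inspected with ovn-sbctl; a sketch using the logical port UUID from the log (run wherever the SB DB is reachable, e.g. inside the ovn_controller container):

    import subprocess

    lport = '34a110b8-bd03-4b38-8f53-7380a2e1fc82'
    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding', 'logical_port=' + lport],
        capture_output=True, text=True, check=True).stdout
    print(out)   # the 'chassis' column is empty once the port is released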
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.676 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.681 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:29:21 192.168.0.42'], port_security=['fa:16:3e:57:29:21 192.168.0.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-ifzd7ux27mgz-k2kyvgwezk52-kclaz4xd52sx-port-bw4jnbdw5py4', 'neutron:cidrs': '192.168.0.42/24', 'neutron:device_id': '2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-ifzd7ux27mgz-k2kyvgwezk52-kclaz4xd52sx-port-bw4jnbdw5py4', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=34a110b8-bd03-4b38-8f53-7380a2e1fc82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.683 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.686 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 34a110b8-bd03-4b38-8f53-7380a2e1fc82 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a unbound from our chassis
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.688 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 863f062e-1672-4c9a-8889-3b2ee95f838a
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.702 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec386c3-6b0f-4d1c-b2b4-4806cce24c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 24 16:03:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 57.099s CPU time.
Feb 24 16:03:01 compute-0 systemd-machined[158049]: Machine qemu-4-instance-00000004 terminated.
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.731 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[818e7f40-d6ff-41cb-bbc9-d60bfd188de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.736 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8a9854-98af-45e5-b7d8-37df7e1753aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.768 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[76cd83c9-20d6-4c7b-b399-e8f681e774a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.787 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[342b2c41-4a4e-463b-a51c-8872edfda99a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap863f062e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:6f:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365300, 'reachable_time': 20791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250175, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.803 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f232ff-ade1-4146-8ea6-6544db05d763]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365310, 'tstamp': 365310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250176, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap863f062e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365313, 'tstamp': 365313}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250176, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
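[editor's note] The privsep replies above show the metadata interface tap863f062e-11 inside the ovnmeta-863f062e-... namespace holding 169.254.169.254/32 and 192.168.0.2/24. The same state can be read back directly; a sketch with the names taken from the log (requires root on the compute host):

    import subprocess

    ns = 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a'
    print(subprocess.run(
        ['ip', 'netns', 'exec', ns, 'ip', 'addr', 'show', 'tap863f062e-11'],
        capture_output=True, text=True, check=True).stdout)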
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.806 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.809 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.815 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.815 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap863f062e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.816 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.817 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap863f062e-10, col_values=(('external_ids', {'iface-id': 'e7d10e1c-8dfe-4042-832a-f76958f5496a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:01.817 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.884 188707 INFO nova.virt.libvirt.driver [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Instance destroyed successfully.
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.885 188707 DEBUG nova.objects.instance [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'resources' on Instance uuid 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.900 188707 DEBUG nova.virt.libvirt.vif [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T15:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ux27mgz-k2kyvgwezk52-kclaz4xd52sx-vnf-eq5ytar2dclh',id=4,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T15:52:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='105127c2-20fd-4471-8609-2ac19fea2fd2'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-c6i55v07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T15:52:50Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0Njg0MDc2MjMzMTIwNDc3OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDY4NDA3NjIzMzEyMDQ3Nzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQ2ODQwNzYyMzMxMjA0Nzc4PT0tLQo=',user_id='bd338d866e3242aeb685fec99c451955',uuid=2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.900 188707 DEBUG nova.network.os_vif_util [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.901 188707 DEBUG nova.network.os_vif_util [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.902 188707 DEBUG os_vif [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.904 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.904 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a110b8-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.907 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.909 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.914 188707 INFO os_vif [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:29:21,bridge_name='br-int',has_traffic_filtering=True,id=34a110b8-bd03-4b38-8f53-7380a2e1fc82,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap34a110b8-bd')
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.915 188707 INFO nova.virt.libvirt.driver [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Deleting instance files /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354_del
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.916 188707 INFO nova.virt.libvirt.driver [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Deletion of /var/lib/nova/instances/2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354_del complete
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.992 188707 INFO nova.compute.manager [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Took 0.40 seconds to destroy the instance on the hypervisor.
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.994 188707 DEBUG oslo.service.loopingcall [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.994 188707 DEBUG nova.compute.manager [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:03:01 compute-0 nova_compute[188703]: 2026-02-24 16:03:01.995 188707 DEBUG nova.network.neutron [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:03:02 compute-0 rsyslogd[239437]: message too long (8192) with configured size 8096, begin of message is: 2026-02-24 16:03:01.900 188707 DEBUG nova.virt.libvirt.vif [None req-f0b7fcb8-1b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.493 188707 DEBUG nova.compute.manager [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-vif-unplugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.493 188707 DEBUG oslo_concurrency.lockutils [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.494 188707 DEBUG oslo_concurrency.lockutils [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.494 188707 DEBUG oslo_concurrency.lockutils [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.494 188707 DEBUG nova.compute.manager [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] No waiting events found dispatching network-vif-unplugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.494 188707 DEBUG nova.compute.manager [req-a3116018-4e4d-4346-a6f3-049fa9b435a7 req-0571f1a7-c9fe-43fe-b398-94b5b99bc55e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-vif-unplugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:03:02 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:02.672 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.673 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:02 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:02.674 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.729 188707 DEBUG nova.compute.manager [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-changed-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.730 188707 DEBUG nova.compute.manager [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Refreshing instance network info cache due to event network-changed-34a110b8-bd03-4b38-8f53-7380a2e1fc82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.731 188707 DEBUG oslo_concurrency.lockutils [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.731 188707 DEBUG oslo_concurrency.lockutils [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.732 188707 DEBUG nova.network.neutron [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Refreshing network info cache for port 34a110b8-bd03-4b38-8f53-7380a2e1fc82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:03:02 compute-0 nova_compute[188703]: 2026-02-24 16:03:02.963 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 24 16:03:03 compute-0 nova_compute[188703]: 2026-02-24 16:03:03.251 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:03:03 compute-0 nova_compute[188703]: 2026-02-24 16:03:03.252 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:03:03 compute-0 nova_compute[188703]: 2026-02-24 16:03:03.252 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:03:03 compute-0 nova_compute[188703]: 2026-02-24 16:03:03.252 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:03:03 compute-0 nova_compute[188703]: 2026-02-24 16:03:03.290 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.356 188707 DEBUG nova.network.neutron [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.377 188707 INFO nova.compute.manager [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Took 2.38 seconds to deallocate network for instance.
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.429 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.431 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.524 188707 DEBUG nova.compute.provider_tree [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.541 188707 DEBUG nova.scheduler.client.report [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.567 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.608 188707 DEBUG nova.compute.manager [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.610 188707 DEBUG oslo_concurrency.lockutils [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.611 188707 DEBUG oslo_concurrency.lockutils [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.611 188707 DEBUG oslo_concurrency.lockutils [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.612 188707 DEBUG nova.compute.manager [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] No waiting events found dispatching network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.612 188707 WARNING nova.compute.manager [req-1e4f9447-ac95-4925-afa4-75f562512f6f req-60b874b1-5f8b-48e7-b24b-9ad85847ebb4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Received unexpected event network-vif-plugged-34a110b8-bd03-4b38-8f53-7380a2e1fc82 for instance with vm_state deleted and task_state None.
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.614 188707 INFO nova.scheduler.client.report [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Deleted allocations for instance 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.677 188707 DEBUG oslo_concurrency.lockutils [None req-f0b7fcb8-1bdf-414e-8fac-dd6586ec3ea3 bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.989 188707 DEBUG nova.network.neutron [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updated VIF entry in instance network info cache for port 34a110b8-bd03-4b38-8f53-7380a2e1fc82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:03:04 compute-0 nova_compute[188703]: 2026-02-24 16:03:04.990 188707 DEBUG nova.network.neutron [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Updating instance_info_cache with network_info: [{"id": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "address": "fa:16:3e:57:29:21", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a110b8-bd", "ovs_interfaceid": "34a110b8-bd03-4b38-8f53-7380a2e1fc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.006 188707 DEBUG oslo_concurrency.lockutils [req-af9af650-7c2c-46b0-9ce6-492d1a5a8dc0 req-89d6113f-a43b-4821-a59b-dd7f06b7969c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.267 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [{"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.288 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fd83ae88-f3e1-49ef-8167-b8451d014cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.289 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.290 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.291 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.292 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.293 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:03:05 compute-0 nova_compute[188703]: 2026-02-24 16:03:05.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:06 compute-0 podman[250200]: 2026-02-24 16:03:06.149448667 +0000 UTC m=+0.082607262 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:03:06 compute-0 podman[250199]: 2026-02-24 16:03:06.172592415 +0000 UTC m=+0.109690628 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:03:06 compute-0 nova_compute[188703]: 2026-02-24 16:03:06.907 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:06 compute-0 nova_compute[188703]: 2026-02-24 16:03:06.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:07.678 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:07 compute-0 nova_compute[188703]: 2026-02-24 16:03:07.967 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:08 compute-0 nova_compute[188703]: 2026-02-24 16:03:08.293 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:10 compute-0 podman[250241]: 2026-02-24 16:03:10.134616835 +0000 UTC m=+0.084849214 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, config_id=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64)
Feb 24 16:03:10 compute-0 podman[250242]: 2026-02-24 16:03:10.155726047 +0000 UTC m=+0.095911778 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:03:10 compute-0 nova_compute[188703]: 2026-02-24 16:03:10.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:10 compute-0 nova_compute[188703]: 2026-02-24 16:03:10.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:10 compute-0 nova_compute[188703]: 2026-02-24 16:03:10.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:10 compute-0 nova_compute[188703]: 2026-02-24 16:03:10.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:10 compute-0 nova_compute[188703]: 2026-02-24 16:03:10.971 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.065 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.155 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.157 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.210 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.224 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.280 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.281 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.357 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.829 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.830 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5081MB free_disk=72.21241760253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.831 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.831 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.894 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fd83ae88-f3e1-49ef-8167-b8451d014cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.895 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.896 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.911 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.951 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.969 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.997 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:03:11 compute-0 nova_compute[188703]: 2026-02-24 16:03:11.998 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:13 compute-0 nova_compute[188703]: 2026-02-24 16:03:13.295 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:15 compute-0 podman[250293]: 2026-02-24 16:03:15.173194739 +0000 UTC m=+0.124902929 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, release=1770267347, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z)
Feb 24 16:03:16 compute-0 nova_compute[188703]: 2026-02-24 16:03:16.881 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771948981.8793623, 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:03:16 compute-0 nova_compute[188703]: 2026-02-24 16:03:16.882 188707 INFO nova.compute.manager [-] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] VM Stopped (Lifecycle Event)
Feb 24 16:03:16 compute-0 nova_compute[188703]: 2026-02-24 16:03:16.909 188707 DEBUG nova.compute.manager [None req-324a96b2-bb45-42d6-ac89-922e9cf68345 - - - - - -] [instance: 2cb64c5b-c1a1-4bc4-aa11-7bf718e3c354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:03:16 compute-0 nova_compute[188703]: 2026-02-24 16:03:16.914 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:18 compute-0 nova_compute[188703]: 2026-02-24 16:03:18.298 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:20 compute-0 sshd-session[250315]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 16:03:21 compute-0 podman[250317]: 2026-02-24 16:03:21.183857313 +0000 UTC m=+0.135383058 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Feb 24 16:03:21 compute-0 podman[250318]: 2026-02-24 16:03:21.226415677 +0000 UTC m=+0.176555614 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.324 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.325 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.326 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.327 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.327 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.330 188707 INFO nova.compute.manager [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Terminating instance
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.332 188707 DEBUG nova.compute.manager [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:03:21 compute-0 kernel: tap4fe2ff99-5b (unregistering): left promiscuous mode
Feb 24 16:03:21 compute-0 NetworkManager[56995]: <info>  [1771949001.3843] device (tap4fe2ff99-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:03:21 compute-0 ovn_controller[98701]: 2026-02-24T16:03:21Z|00061|binding|INFO|Releasing lport 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 from this chassis (sb_readonly=0)
Feb 24 16:03:21 compute-0 ovn_controller[98701]: 2026-02-24T16:03:21Z|00062|binding|INFO|Setting lport 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 down in Southbound
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.394 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 ovn_controller[98701]: 2026-02-24T16:03:21Z|00063|binding|INFO|Removing iface tap4fe2ff99-5b ovn-installed in OVS
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.401 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.411 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.411 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:4c:f6 192.168.0.39'], port_security=['fa:16:3e:1e:4c:f6 192.168.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.39/24', 'neutron:device_id': 'fd83ae88-f3e1-49ef-8167-b8451d014cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-863f062e-1672-4c9a-8889-3b2ee95f838a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4407f5b870e145d8917119ad928717e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9038fe38-7d22-46f5-bd37-0cab71bf22d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231de057-8460-4792-a8ff-f638ed53c1a8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.414 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 in datapath 863f062e-1672-4c9a-8889-3b2ee95f838a unbound from our chassis
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.416 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 863f062e-1672-4c9a-8889-3b2ee95f838a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.418 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[833e58d4-901f-4714-a222-807b392f86fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.419 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a namespace which is not needed anymore
Feb 24 16:03:21 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 24 16:03:21 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 1.423s CPU time.
Feb 24 16:03:21 compute-0 systemd-machined[158049]: Machine qemu-1-instance-00000001 terminated.
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.561 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.570 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [NOTICE]   (242262) : haproxy version is 2.8.14-c23fe91
Feb 24 16:03:21 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [NOTICE]   (242262) : path to executable is /usr/sbin/haproxy
Feb 24 16:03:21 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [WARNING]  (242262) : Exiting Master process...
Feb 24 16:03:21 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [ALERT]    (242262) : Current worker (242264) exited with code 143 (Terminated)
Feb 24 16:03:21 compute-0 neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a[242258]: [WARNING]  (242262) : All workers exited. Exiting... (0)
Feb 24 16:03:21 compute-0 systemd[1]: libpod-324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d.scope: Deactivated successfully.
Feb 24 16:03:21 compute-0 podman[250386]: 2026-02-24 16:03:21.617952055 +0000 UTC m=+0.075897717 container died 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.619 188707 INFO nova.virt.libvirt.driver [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Instance destroyed successfully.
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.620 188707 DEBUG nova.objects.instance [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lazy-loading 'resources' on Instance uuid fd83ae88-f3e1-49ef-8167-b8451d014cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.632 188707 DEBUG nova.virt.libvirt.vif [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T15:45:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T15:45:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4407f5b870e145d8917119ad928717e8',ramdisk_id='',reservation_id='r-tehi0e8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de6b8fc8-e0dc-4bbf-943b-e6ac6027af11',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T15:45:37Z,user_data=None,user_id='bd338d866e3242aeb685fec99c451955',uuid=fd83ae88-f3e1-49ef-8167-b8451d014cf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.632 188707 DEBUG nova.network.os_vif_util [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converting VIF {"id": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "address": "fa:16:3e:1e:4c:f6", "network": {"id": "863f062e-1672-4c9a-8889-3b2ee95f838a", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4407f5b870e145d8917119ad928717e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe2ff99-5b", "ovs_interfaceid": "4fe2ff99-5ba5-49b4-a275-e8c5c9b51888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.633 188707 DEBUG nova.network.os_vif_util [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.634 188707 DEBUG os_vif [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.635 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.635 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe2ff99-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.637 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.639 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.641 188707 INFO os_vif [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:4c:f6,bridge_name='br-int',has_traffic_filtering=True,id=4fe2ff99-5ba5-49b4-a275-e8c5c9b51888,network=Network(863f062e-1672-4c9a-8889-3b2ee95f838a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe2ff99-5b')
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.642 188707 INFO nova.virt.libvirt.driver [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Deleting instance files /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7_del
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.642 188707 INFO nova.virt.libvirt.driver [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Deletion of /var/lib/nova/instances/fd83ae88-f3e1-49ef-8167-b8451d014cf7_del complete
Feb 24 16:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d-userdata-shm.mount: Deactivated successfully.
Feb 24 16:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9251c3eaa094565743e690daf4a529b9b6c4f91a44e371326b9f997792597b5-merged.mount: Deactivated successfully.
Feb 24 16:03:21 compute-0 podman[250386]: 2026-02-24 16:03:21.680346046 +0000 UTC m=+0.138291748 container cleanup 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 24 16:03:21 compute-0 systemd[1]: libpod-conmon-324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d.scope: Deactivated successfully.
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.697 188707 INFO nova.compute.manager [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.698 188707 DEBUG oslo.service.loopingcall [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.699 188707 DEBUG nova.compute.manager [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.699 188707 DEBUG nova.network.neutron [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:03:21 compute-0 podman[250433]: 2026-02-24 16:03:21.765890587 +0000 UTC m=+0.058134785 container remove 324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.771 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee730cd-73d1-45e8-b4ab-dbb9b6318bcd]: (4, ('Tue Feb 24 04:03:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a (324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d)\n324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d\nTue Feb 24 04:03:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a (324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d)\n324324cf46f93442dfcab56e9d13eaec3db5d83c964caf9f23f78e879840c75d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.773 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d32c2188-ea58-4a67-bb1e-5d2b02df8027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.774 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap863f062e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.776 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 kernel: tap863f062e-10: left promiscuous mode
Feb 24 16:03:21 compute-0 nova_compute[188703]: 2026-02-24 16:03:21.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.787 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1c30dc2d-d318-41ff-8de0-356f45cc6d6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.807 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[f281163b-fd21-483d-9d8b-5053ad1a71f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.809 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6507b292-ab1f-4816-bc00-a2c63e8e1b31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.823 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c36873ce-f6ae-444c-a7a3-5c1c204fcf32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365290, 'reachable_time': 35881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250448, 'error': None, 'target': 'ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d863f062e\x2d1672\x2d4c9a\x2d8889\x2d3b2ee95f838a.mount: Deactivated successfully.
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.832 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-863f062e-1672-4c9a-8889-3b2ee95f838a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:03:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:21.833 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[fb65100a-2c74-463e-93ec-43629f548039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.496 188707 DEBUG nova.compute.manager [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-unplugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.497 188707 DEBUG oslo_concurrency.lockutils [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.498 188707 DEBUG oslo_concurrency.lockutils [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.499 188707 DEBUG oslo_concurrency.lockutils [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.500 188707 DEBUG nova.compute.manager [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] No waiting events found dispatching network-vif-unplugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:03:22 compute-0 nova_compute[188703]: 2026-02-24 16:03:22.500 188707 DEBUG nova.compute.manager [req-a28891da-d319-4277-947b-907ba2cc1507 req-46b046b8-a48b-49ac-bc3f-df253123a516 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-unplugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.301 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.805 188707 DEBUG nova.network.neutron [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.835 188707 INFO nova.compute.manager [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Took 2.14 seconds to deallocate network for instance.
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.869 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.870 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.932 188707 DEBUG nova.compute.manager [req-1fa699d7-d564-4336-b4cc-ee6f2715cbf5 req-322a22a4-b508-4b02-9948-e8bc3e477493 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-deleted-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.953 188707 DEBUG nova.compute.provider_tree [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.968 188707 DEBUG nova.scheduler.client.report [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:03:23 compute-0 nova_compute[188703]: 2026-02-24 16:03:23.990 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.022 188707 INFO nova.scheduler.client.report [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Deleted allocations for instance fd83ae88-f3e1-49ef-8167-b8451d014cf7
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.151 188707 DEBUG oslo_concurrency.lockutils [None req-a850e9e6-2235-4fcd-990b-e70385ccd6ac bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.599 188707 DEBUG nova.compute.manager [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.599 188707 DEBUG oslo_concurrency.lockutils [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.600 188707 DEBUG oslo_concurrency.lockutils [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.601 188707 DEBUG oslo_concurrency.lockutils [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fd83ae88-f3e1-49ef-8167-b8451d014cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.601 188707 DEBUG nova.compute.manager [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] No waiting events found dispatching network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:03:24 compute-0 nova_compute[188703]: 2026-02-24 16:03:24.602 188707 WARNING nova.compute.manager [req-c19d1cba-3e83-4210-88b3-cf7bc2c6a9a3 req-f0ab70ff-d201-47d1-8efd-3c7cb056b662 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Received unexpected event network-vif-plugged-4fe2ff99-5ba5-49b4-a275-e8c5c9b51888 for instance with vm_state deleted and task_state None.
Feb 24 16:03:26 compute-0 nova_compute[188703]: 2026-02-24 16:03:26.639 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:27 compute-0 podman[250451]: 2026-02-24 16:03:27.141450385 +0000 UTC m=+0.102555223 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:03:27 compute-0 sshd-session[250450]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 16:03:28 compute-0 nova_compute[188703]: 2026-02-24 16:03:28.303 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:29 compute-0 podman[204685]: time="2026-02-24T16:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:03:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:03:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
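
The two GET requests above are the exporter talking to podman's libpod REST API over the unix socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). The same endpoint can be hit with only the Python standard library:

    # Minimal sketch of the API call logged above, issued over the podman unix
    # socket; needs access to /run/podman/podman.sock (typically root).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()                 # the logged request answered HTTP 200
    containers = json.loads(resp.read())
    print(len(containers), "containers")
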
Feb 24 16:03:31 compute-0 openstack_network_exporter[207830]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:03:31 compute-0 openstack_network_exporter[207830]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
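
These two ERROR lines recur throughout the log: openstack_network_exporter calls the ovs-vswitchd appctl commands dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show, which only apply to a userspace (netdev/DPDK) datapath. On a host running only the kernel datapath, ovs-vswitchd answers "please specify an existing datapath" and the exporter surfaces that as an error. The failing call can be reproduced as below, assuming ovs-appctl is on PATH:

    # Reproducing the failing appctl call; assumes ovs-appctl is installed and
    # ovs-vswitchd has no userspace (netdev) datapath configured.
    import subprocess

    proc = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        print(proc.stderr.strip())  # "please specify an existing datapath"
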
Feb 24 16:03:31 compute-0 nova_compute[188703]: 2026-02-24 16:03:31.643 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:33 compute-0 nova_compute[188703]: 2026-02-24 16:03:33.306 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:36 compute-0 nova_compute[188703]: 2026-02-24 16:03:36.615 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949001.6139696, fd83ae88-f3e1-49ef-8167-b8451d014cf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:03:36 compute-0 nova_compute[188703]: 2026-02-24 16:03:36.616 188707 INFO nova.compute.manager [-] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] VM Stopped (Lifecycle Event)
Feb 24 16:03:36 compute-0 nova_compute[188703]: 2026-02-24 16:03:36.639 188707 DEBUG nova.compute.manager [None req-b58c5d68-4fea-4ee0-9ab2-489a6803fd9b - - - - - -] [instance: fd83ae88-f3e1-49ef-8167-b8451d014cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
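
This trio of lines is nova reacting to a libvirt lifecycle event for the just-deleted instance: the driver emits a Stopped event, the manager logs "VM Stopped (Lifecycle Event)" and re-reads the domain's power state before deciding whether anything needs syncing. A simplified sketch of that flow; the helper callables are hypothetical stand-ins, and the numeric states mirror nova.compute.power_state:

    # Simplified sketch of the lifecycle handling above. get_power_state and
    # sync_power_state are hypothetical stand-ins for nova's internals.
    RUNNING, SHUTDOWN = 1, 4   # values as in nova.compute.power_state

    def handle_lifecycle_event(instance, event, get_power_state, sync_power_state):
        if event == "Stopped":                      # "VM Stopped (Lifecycle Event)"
            current = get_power_state(instance)     # "Checking state _get_power_state"
            if current != instance["power_state"]:
                sync_power_state(instance, current) # reconcile DB with hypervisor
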
Feb 24 16:03:36 compute-0 nova_compute[188703]: 2026-02-24 16:03:36.646 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:37 compute-0 podman[250475]: 2026-02-24 16:03:37.128801285 +0000 UTC m=+0.079979189 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:03:37 compute-0 podman[250476]: 2026-02-24 16:03:37.15218516 +0000 UTC m=+0.096925456 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:03:38 compute-0 nova_compute[188703]: 2026-02-24 16:03:38.308 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.833 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, processing can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.834 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.834 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.839 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fed528a0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:03:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:03:39.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
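
The ceilometer block above is one complete polling cycle: the manager notes that there are more pollsters than worker threads, registers every pollster against a shared ThreadPoolExecutor, runs the local_instances discovery once and caches its (empty) result, skips each pollster because the only instance on this host was just deleted, and finally logs "Finished processing" per pollster. A condensed sketch of that register/discover/skip flow; the structure is inferred from the messages, not copied from ceilometer:

    # Condensed sketch of the polling cycle logged above (inferred structure,
    # not ceilometer's actual code). Each pollster needs .name and .get_samples.
    from concurrent.futures import ThreadPoolExecutor

    def run_one(pollster, discovery_cache, history, discover):
        if "local_instances" not in discovery_cache:      # "Executing discovery ..."
            discovery_cache["local_instances"] = discover()
        resources = discovery_cache["local_instances"]
        history[pollster.name] = []
        if not resources:                                 # "Skip pollster ..., no resources"
            return f"skipped {pollster.name}"
        return pollster.get_samples(resources)

    def run_cycle(pollsters, discover_local_instances, threads=1):
        discovery_cache, history = {}, {}                 # "[1] threads" in the log
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(run_one, p, discovery_cache, history,
                                       discover_local_instances)
                       for p in pollsters]                # "Registering pollster ..."
            for f in futures:
                f.result()                                # "Finished processing pollster ..."
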
Feb 24 16:03:41 compute-0 podman[250518]: 2026-02-24 16:03:41.162293356 +0000 UTC m=+0.109870483 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Feb 24 16:03:41 compute-0 podman[250517]: 2026-02-24 16:03:41.1798093 +0000 UTC m=+0.133000952 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:03:41 compute-0 nova_compute[188703]: 2026-02-24 16:03:41.650 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:43 compute-0 nova_compute[188703]: 2026-02-24 16:03:43.311 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:46 compute-0 podman[250555]: 2026-02-24 16:03:46.135545916 +0000 UTC m=+0.087012393 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, release=1770267347, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 16:03:46 compute-0 nova_compute[188703]: 2026-02-24 16:03:46.652 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:48 compute-0 nova_compute[188703]: 2026-02-24 16:03:48.313 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:51 compute-0 nova_compute[188703]: 2026-02-24 16:03:51.655 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:52 compute-0 podman[250578]: 2026-02-24 16:03:52.164631191 +0000 UTC m=+0.119988643 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:03:52 compute-0 podman[250579]: 2026-02-24 16:03:52.197494448 +0000 UTC m=+0.149545579 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 24 16:03:53 compute-0 nova_compute[188703]: 2026-02-24 16:03:53.316 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:53 compute-0 ovn_controller[98701]: 2026-02-24T16:03:53Z|00064|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 24 16:03:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:55.728 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:03:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:03:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:03:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:03:56 compute-0 nova_compute[188703]: 2026-02-24 16:03:56.660 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:57 compute-0 sshd-session[250577]: Invalid user ubnt from 80.94.95.115 port 30134
Feb 24 16:03:58 compute-0 podman[250623]: 2026-02-24 16:03:58.055785709 +0000 UTC m=+0.098656504 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:03:58 compute-0 sshd-session[250577]: Connection closed by invalid user ubnt 80.94.95.115 port 30134 [preauth]
Feb 24 16:03:58 compute-0 nova_compute[188703]: 2026-02-24 16:03:58.319 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:03:59 compute-0 nova_compute[188703]: 2026-02-24 16:03:58.999 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:03:59 compute-0 podman[204685]: time="2026-02-24T16:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:03:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:03:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3907 "" "Go-http-client/1.1"
Feb 24 16:04:01 compute-0 openstack_network_exporter[207830]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:04:01 compute-0 openstack_network_exporter[207830]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:04:01 compute-0 nova_compute[188703]: 2026-02-24 16:04:01.664 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:01 compute-0 nova_compute[188703]: 2026-02-24 16:04:01.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:02 compute-0 nova_compute[188703]: 2026-02-24 16:04:02.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:02 compute-0 nova_compute[188703]: 2026-02-24 16:04:02.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:04:02 compute-0 nova_compute[188703]: 2026-02-24 16:04:02.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:04:02 compute-0 nova_compute[188703]: 2026-02-24 16:04:02.969 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:04:03 compute-0 nova_compute[188703]: 2026-02-24 16:04:03.321 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:05 compute-0 nova_compute[188703]: 2026-02-24 16:04:05.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:05 compute-0 nova_compute[188703]: 2026-02-24 16:04:05.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:05 compute-0 nova_compute[188703]: 2026-02-24 16:04:05.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:05 compute-0 nova_compute[188703]: 2026-02-24 16:04:05.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:04:06 compute-0 nova_compute[188703]: 2026-02-24 16:04:06.667 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:07 compute-0 nova_compute[188703]: 2026-02-24 16:04:07.940 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:07 compute-0 nova_compute[188703]: 2026-02-24 16:04:07.940 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:08 compute-0 podman[250647]: 2026-02-24 16:04:08.131986182 +0000 UTC m=+0.084260377 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:04:08 compute-0 podman[250646]: 2026-02-24 16:04:08.150607487 +0000 UTC m=+0.112717953 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:04:08 compute-0 nova_compute[188703]: 2026-02-24 16:04:08.327 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:10 compute-0 nova_compute[188703]: 2026-02-24 16:04:10.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:10 compute-0 nova_compute[188703]: 2026-02-24 16:04:10.980 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:04:10 compute-0 nova_compute[188703]: 2026-02-24 16:04:10.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:04:10 compute-0 nova_compute[188703]: 2026-02-24 16:04:10.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:04:10 compute-0 nova_compute[188703]: 2026-02-24 16:04:10.981 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.375 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.377 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5356MB free_disk=72.2344856262207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.377 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.378 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.449 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.450 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.480 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.494 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.516 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.517 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:04:11 compute-0 nova_compute[188703]: 2026-02-24 16:04:11.671 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:12 compute-0 podman[250691]: 2026-02-24 16:04:12.138600393 +0000 UTC m=+0.093937864 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 24 16:04:12 compute-0 podman[250690]: 2026-02-24 16:04:12.177404974 +0000 UTC m=+0.131705046 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:04:13 compute-0 nova_compute[188703]: 2026-02-24 16:04:13.330 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:16 compute-0 nova_compute[188703]: 2026-02-24 16:04:16.676 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:17 compute-0 podman[250727]: 2026-02-24 16:04:17.168773027 +0000 UTC m=+0.123502901 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 24 16:04:18 compute-0 nova_compute[188703]: 2026-02-24 16:04:18.334 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:21 compute-0 nova_compute[188703]: 2026-02-24 16:04:21.679 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:23 compute-0 podman[250749]: 2026-02-24 16:04:23.142973585 +0000 UTC m=+0.103292181 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223)
Feb 24 16:04:23 compute-0 podman[250750]: 2026-02-24 16:04:23.17536665 +0000 UTC m=+0.131554372 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller)
Feb 24 16:04:23 compute-0 nova_compute[188703]: 2026-02-24 16:04:23.338 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:26 compute-0 nova_compute[188703]: 2026-02-24 16:04:26.683 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:28 compute-0 nova_compute[188703]: 2026-02-24 16:04:28.340 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:29 compute-0 podman[250795]: 2026-02-24 16:04:29.145451697 +0000 UTC m=+0.087635991 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:04:29 compute-0 podman[204685]: time="2026-02-24T16:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:04:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:04:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Feb 24 16:04:31 compute-0 openstack_network_exporter[207830]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:04:31 compute-0 openstack_network_exporter[207830]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:04:31 compute-0 nova_compute[188703]: 2026-02-24 16:04:31.686 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:33 compute-0 nova_compute[188703]: 2026-02-24 16:04:33.343 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:36 compute-0 nova_compute[188703]: 2026-02-24 16:04:36.689 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:38 compute-0 nova_compute[188703]: 2026-02-24 16:04:38.345 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:39 compute-0 podman[250818]: 2026-02-24 16:04:39.145832697 +0000 UTC m=+0.093683387 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:04:39 compute-0 podman[250819]: 2026-02-24 16:04:39.155246197 +0000 UTC m=+0.099368104 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 16:04:41 compute-0 nova_compute[188703]: 2026-02-24 16:04:41.692 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:43 compute-0 podman[250862]: 2026-02-24 16:04:43.161644521 +0000 UTC m=+0.113716670 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible)
Feb 24 16:04:43 compute-0 podman[250863]: 2026-02-24 16:04:43.180031228 +0000 UTC m=+0.128430926 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0)
Feb 24 16:04:43 compute-0 nova_compute[188703]: 2026-02-24 16:04:43.347 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:46 compute-0 nova_compute[188703]: 2026-02-24 16:04:46.695 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:48 compute-0 podman[250902]: 2026-02-24 16:04:48.181502798 +0000 UTC m=+0.134461002 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1770267347, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 24 16:04:48 compute-0 nova_compute[188703]: 2026-02-24 16:04:48.350 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:51 compute-0 nova_compute[188703]: 2026-02-24 16:04:51.697 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:53 compute-0 nova_compute[188703]: 2026-02-24 16:04:53.354 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:54 compute-0 podman[250924]: 2026-02-24 16:04:54.145050215 +0000 UTC m=+0.100364832 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 24 16:04:54 compute-0 podman[250925]: 2026-02-24 16:04:54.196456283 +0000 UTC m=+0.147290606 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 24 16:04:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:04:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:04:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:04:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:04:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:04:55.729 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
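The acquiring/acquired/released triplet is oslo.concurrency's built-in lock tracing, with the waited and held durations appended. The ovn_metadata_agent serializes _check_child_processes through one named in-process lock; a minimal equivalent with lockutils (the function body here is a placeholder):

    from oslo_concurrency import lockutils

    # Every call site sharing this name takes the same in-process lock,
    # and oslo logs the acquire/held timings seen in the journal above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # ... inspect monitored children, respawn any that died ...
        pass

    check_child_processes()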
Feb 24 16:04:56 compute-0 nova_compute[188703]: 2026-02-24 16:04:56.701 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:58 compute-0 nova_compute[188703]: 2026-02-24 16:04:58.355 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:04:59 compute-0 nova_compute[188703]: 2026-02-24 16:04:59.518 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:04:59 compute-0 podman[204685]: time="2026-02-24T16:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:04:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:04:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3907 "" "Go-http-client/1.1"
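The two GET lines are prometheus-podman-exporter scraping the libpod REST API through the podman unix socket (the CONTAINER_HOST visible in the podman_exporter config_data later in this log). A stdlib-only sketch of the same containers/json call, assuming that socket path and the v4.9.3 API prefix shown above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a unix socket; enough for the libpod API."""

        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")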
Feb 24 16:05:00 compute-0 podman[250967]: 2026-02-24 16:05:00.184917297 +0000 UTC m=+0.136023986 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:05:01 compute-0 openstack_network_exporter[207830]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:05:01 compute-0 openstack_network_exporter[207830]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
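These two errors recur on every scrape: the exporter invokes the dpif-netdev/pmd-* appctl commands, which only answer when a userspace (netdev/DPDK) datapath exists, and this host runs the kernel datapath only. A sketch of probing for a netdev datapath before calling, assuming ovs-appctl is on PATH:

    import subprocess

    def pmd_perf_show():
        # dpif-netdev/* commands need a userspace datapath; probe first
        # instead of letting the call fail as the exporter does above.
        dps = subprocess.run(
            ["ovs-appctl", "dpctl/dump-dps"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        if not any(dp.startswith("netdev@") for dp in dps):
            return None  # kernel datapath only: nothing to report
        return subprocess.run(
            ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
            capture_output=True, text=True, check=True,
        ).stdout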
Feb 24 16:05:01 compute-0 nova_compute[188703]: 2026-02-24 16:05:01.705 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:01 compute-0 nova_compute[188703]: 2026-02-24 16:05:01.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:03 compute-0 nova_compute[188703]: 2026-02-24 16:05:03.359 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:03 compute-0 nova_compute[188703]: 2026-02-24 16:05:03.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:03 compute-0 nova_compute[188703]: 2026-02-24 16:05:03.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:05:03 compute-0 nova_compute[188703]: 2026-02-24 16:05:03.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:05:03 compute-0 nova_compute[188703]: 2026-02-24 16:05:03.959 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:05:05 compute-0 nova_compute[188703]: 2026-02-24 16:05:05.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:06 compute-0 nova_compute[188703]: 2026-02-24 16:05:06.708 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.940 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.963 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.964 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.965 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:07 compute-0 nova_compute[188703]: 2026-02-24 16:05:07.966 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
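The run above is one sweep of nova-compute's periodic tasks, ending with _reclaim_queued_deletes bailing out because reclaim_instance_interval <= 0. The same shape can be sketched with oslo_service.periodic_task; the option name matches nova's, the task body is illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _reclaim_queued_deletes(self, context):
            # Same gate as the log line above.
            if CONF.reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")
                return

    Manager().run_periodic_tasks(context=None)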
Feb 24 16:05:08 compute-0 nova_compute[188703]: 2026-02-24 16:05:08.360 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:10 compute-0 podman[250991]: 2026-02-24 16:05:10.133459567 +0000 UTC m=+0.090503539 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:05:10 compute-0 podman[250992]: 2026-02-24 16:05:10.137576171 +0000 UTC m=+0.097513193 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.711 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:05:11 compute-0 nova_compute[188703]: 2026-02-24 16:05:11.970 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.293 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.295 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5358MB free_disk=72.2344856262207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.296 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.296 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.358 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.358 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.385 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.398 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.401 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:05:12 compute-0 nova_compute[188703]: 2026-02-24 16:05:12.401 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
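The inventory dict reported to placement above fixes what this node can offer: capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the exact numbers from the log:

    # Placement capacity = (total - reserved) * allocation_ratio,
    # using the inventory data reported above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2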
Feb 24 16:05:13 compute-0 nova_compute[188703]: 2026-02-24 16:05:13.363 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:14 compute-0 podman[251032]: 2026-02-24 16:05:14.163105953 +0000 UTC m=+0.120723914 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Feb 24 16:05:14 compute-0 podman[251033]: 2026-02-24 16:05:14.162837265 +0000 UTC m=+0.119102838 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:05:16 compute-0 nova_compute[188703]: 2026-02-24 16:05:16.714 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:18 compute-0 nova_compute[188703]: 2026-02-24 16:05:18.365 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:19 compute-0 podman[251072]: 2026-02-24 16:05:19.143066479 +0000 UTC m=+0.101303497 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc.)
Feb 24 16:05:21 compute-0 nova_compute[188703]: 2026-02-24 16:05:21.716 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:23 compute-0 nova_compute[188703]: 2026-02-24 16:05:23.367 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:25 compute-0 podman[251093]: 2026-02-24 16:05:25.147659037 +0000 UTC m=+0.093926203 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0)
Feb 24 16:05:25 compute-0 podman[251094]: 2026-02-24 16:05:25.201283397 +0000 UTC m=+0.150148305 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 24 16:05:25 compute-0 sshd-session[251137]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 16:05:26 compute-0 nova_compute[188703]: 2026-02-24 16:05:26.720 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:28 compute-0 nova_compute[188703]: 2026-02-24 16:05:28.372 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:29 compute-0 podman[204685]: time="2026-02-24T16:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:05:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:05:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Feb 24 16:05:31 compute-0 podman[251139]: 2026-02-24 16:05:31.124716027 +0000 UTC m=+0.083562738 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:05:31 compute-0 openstack_network_exporter[207830]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:05:31 compute-0 openstack_network_exporter[207830]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:05:31 compute-0 nova_compute[188703]: 2026-02-24 16:05:31.723 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:33 compute-0 nova_compute[188703]: 2026-02-24 16:05:33.375 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:36 compute-0 nova_compute[188703]: 2026-02-24 16:05:36.726 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:38 compute-0 nova_compute[188703]: 2026-02-24 16:05:38.378 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.834 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.834 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.835 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26022839b0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 rsyslogd[239437]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:05:39.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:05:41 compute-0 podman[251164]: 2026-02-24 16:05:41.13790266 +0000 UTC m=+0.093679737 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:05:41 compute-0 podman[251165]: 2026-02-24 16:05:41.186352347 +0000 UTC m=+0.136888999 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 24 16:05:41 compute-0 nova_compute[188703]: 2026-02-24 16:05:41.729 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:43 compute-0 nova_compute[188703]: 2026-02-24 16:05:43.380 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:44 compute-0 podman[251206]: 2026-02-24 16:05:44.761328641 +0000 UTC m=+0.085423847 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public)
Feb 24 16:05:44 compute-0 podman[251207]: 2026-02-24 16:05:44.805570987 +0000 UTC m=+0.123442430 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, config_id=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:05:46 compute-0 nova_compute[188703]: 2026-02-24 16:05:46.733 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:48 compute-0 nova_compute[188703]: 2026-02-24 16:05:48.382 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:50 compute-0 podman[251247]: 2026-02-24 16:05:50.128641289 +0000 UTC m=+0.080914183 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9/ubi-minimal)
Feb 24 16:05:51 compute-0 nova_compute[188703]: 2026-02-24 16:05:51.738 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:53 compute-0 nova_compute[188703]: 2026-02-24 16:05:53.385 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:05:55.732 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:05:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:05:55.732 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:05:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:05:55.732 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:05:56 compute-0 podman[251268]: 2026-02-24 16:05:56.133179127 +0000 UTC m=+0.098363886 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:05:56 compute-0 podman[251269]: 2026-02-24 16:05:56.168429243 +0000 UTC m=+0.131569945 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 24 16:05:56 compute-0 nova_compute[188703]: 2026-02-24 16:05:56.746 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:58 compute-0 nova_compute[188703]: 2026-02-24 16:05:58.386 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:05:59 compute-0 podman[204685]: time="2026-02-24T16:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:05:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:05:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Feb 24 16:06:00 compute-0 nova_compute[188703]: 2026-02-24 16:06:00.403 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:01 compute-0 openstack_network_exporter[207830]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:06:01 compute-0 openstack_network_exporter[207830]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:06:01 compute-0 nova_compute[188703]: 2026-02-24 16:06:01.751 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:01 compute-0 nova_compute[188703]: 2026-02-24 16:06:01.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:02 compute-0 podman[251310]: 2026-02-24 16:06:02.152587929 +0000 UTC m=+0.111444076 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:06:03 compute-0 nova_compute[188703]: 2026-02-24 16:06:03.388 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:03 compute-0 nova_compute[188703]: 2026-02-24 16:06:03.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:03 compute-0 nova_compute[188703]: 2026-02-24 16:06:03.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:06:03 compute-0 nova_compute[188703]: 2026-02-24 16:06:03.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:06:03 compute-0 nova_compute[188703]: 2026-02-24 16:06:03.966 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:06:06 compute-0 nova_compute[188703]: 2026-02-24 16:06:06.756 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:06 compute-0 nova_compute[188703]: 2026-02-24 16:06:06.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:07 compute-0 nova_compute[188703]: 2026-02-24 16:06:07.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:07 compute-0 nova_compute[188703]: 2026-02-24 16:06:07.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:07 compute-0 nova_compute[188703]: 2026-02-24 16:06:07.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:06:08 compute-0 nova_compute[188703]: 2026-02-24 16:06:08.392 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:08 compute-0 nova_compute[188703]: 2026-02-24 16:06:08.940 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:09 compute-0 nova_compute[188703]: 2026-02-24 16:06:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:11 compute-0 nova_compute[188703]: 2026-02-24 16:06:11.759 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:11 compute-0 sshd-session[251334]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 16:06:12 compute-0 podman[251337]: 2026-02-24 16:06:12.127241172 +0000 UTC m=+0.080562362 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 16:06:12 compute-0 podman[251336]: 2026-02-24 16:06:12.136588301 +0000 UTC m=+0.090629411 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:06:12 compute-0 nova_compute[188703]: 2026-02-24 16:06:12.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:06:12 compute-0 nova_compute[188703]: 2026-02-24 16:06:12.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:06:12 compute-0 nova_compute[188703]: 2026-02-24 16:06:12.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:06:12 compute-0 nova_compute[188703]: 2026-02-24 16:06:12.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
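The three lockutils lines above are the standard oslo.concurrency pattern: "Acquiring" when the caller enters the wrapper, "acquired :: waited" once it holds the semaphore, and "released :: held" on exit. A minimal sketch of how such lines are produced, assuming plain oslo.concurrency usage (the real method lives in nova.compute.resource_tracker):

    from oslo_concurrency import lockutils

    # Hypothetical stand-in for ResourceTracker.clean_compute_node_cache.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the "compute_resources" semaphore held; the decorator's
        # inner wrapper emits the Acquiring/acquired/released DEBUG lines.
        pass

    clean_compute_node_cache()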
Feb 24 16:06:12 compute-0 nova_compute[188703]: 2026-02-24 16:06:12.985 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.392 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.394 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5355MB free_disk=72.2344856262207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.394 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.395 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.396 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.518 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.518 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.542 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.558 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.559 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.588 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.609 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.640 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.662 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
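Placement derives schedulable capacity from each inventory record as (total - reserved) * allocation_ratio, so the dict logged above works out to 32 schedulable VCPUs, 7167 MB of RAM, and 70.2 GB of disk. A quick recomputation from the logged values:

    # Values copied from the inventory dict logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2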
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.665 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:06:13 compute-0 nova_compute[188703]: 2026-02-24 16:06:13.666 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:06:15 compute-0 podman[251378]: 2026-02-24 16:06:15.114345447 +0000 UTC m=+0.077707474 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 24 16:06:15 compute-0 podman[251377]: 2026-02-24 16:06:15.124008415 +0000 UTC m=+0.083422312 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, architecture=x86_64, release-0.7.12=)
Feb 24 16:06:16 compute-0 nova_compute[188703]: 2026-02-24 16:06:16.763 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:18 compute-0 nova_compute[188703]: 2026-02-24 16:06:18.398 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:21 compute-0 podman[251416]: 2026-02-24 16:06:21.135060494 +0000 UTC m=+0.087000021 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=openstack_network_exporter, architecture=x86_64, release=1770267347, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 16:06:21 compute-0 nova_compute[188703]: 2026-02-24 16:06:21.771 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:23 compute-0 nova_compute[188703]: 2026-02-24 16:06:23.401 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:26 compute-0 nova_compute[188703]: 2026-02-24 16:06:26.778 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:27 compute-0 podman[251438]: 2026-02-24 16:06:27.131787167 +0000 UTC m=+0.087011361 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:06:27 compute-0 podman[251439]: 2026-02-24 16:06:27.165222933 +0000 UTC m=+0.112276141 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:06:28 compute-0 nova_compute[188703]: 2026-02-24 16:06:28.405 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:29 compute-0 podman[204685]: time="2026-02-24T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:06:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:06:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3908 "" "Go-http-client/1.1"
Feb 24 16:06:31 compute-0 openstack_network_exporter[207830]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:06:31 compute-0 openstack_network_exporter[207830]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
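These two errors repeat every polling interval: dpif-netdev/* appctl commands are only registered for the userspace (netdev, i.e. DPDK-style) datapath, and this host runs only the kernel datapath, so ovs-vswitchd answers "please specify an existing datapath". A sketch of a guard that would avoid the calls, assuming only documented ovs-appctl commands (an illustration, not what openstack-network-exporter actually does):

    import subprocess

    # "dpctl/dump-dps" lists datapaths as "type@name", e.g. "system@ovs-system".
    dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True, check=True).stdout
    if any(line.startswith("netdev@") for line in dps.splitlines()):
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"], check=False)
    else:
        print("no userspace datapath; skipping PMD statistics")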
Feb 24 16:06:31 compute-0 nova_compute[188703]: 2026-02-24 16:06:31.781 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:33 compute-0 podman[251479]: 2026-02-24 16:06:33.134258169 +0000 UTC m=+0.095214949 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:06:33 compute-0 nova_compute[188703]: 2026-02-24 16:06:33.408 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:36 compute-0 nova_compute[188703]: 2026-02-24 16:06:36.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:38 compute-0 nova_compute[188703]: 2026-02-24 16:06:38.413 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:41 compute-0 nova_compute[188703]: 2026-02-24 16:06:41.787 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:43 compute-0 podman[251503]: 2026-02-24 16:06:43.144037915 +0000 UTC m=+0.101354939 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:06:43 compute-0 podman[251504]: 2026-02-24 16:06:43.160166381 +0000 UTC m=+0.110552703 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:06:43 compute-0 nova_compute[188703]: 2026-02-24 16:06:43.418 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:46 compute-0 podman[251544]: 2026-02-24 16:06:46.15139891 +0000 UTC m=+0.103666703 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=kepler, container_name=kepler)
Feb 24 16:06:46 compute-0 podman[251545]: 2026-02-24 16:06:46.185639858 +0000 UTC m=+0.129158519 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:06:46 compute-0 nova_compute[188703]: 2026-02-24 16:06:46.790 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:48 compute-0 nova_compute[188703]: 2026-02-24 16:06:48.419 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:51 compute-0 nova_compute[188703]: 2026-02-24 16:06:51.793 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:52 compute-0 podman[251585]: 2026-02-24 16:06:52.142683822 +0000 UTC m=+0.099508337 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-type=git, version=9.7, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:06:53 compute-0 nova_compute[188703]: 2026-02-24 16:06:53.422 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:06:55.734 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:06:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:06:55.734 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:06:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:06:55.735 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:06:56 compute-0 nova_compute[188703]: 2026-02-24 16:06:56.795 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:58 compute-0 podman[251607]: 2026-02-24 16:06:58.1594566 +0000 UTC m=+0.111196710 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 24 16:06:58 compute-0 podman[251606]: 2026-02-24 16:06:58.17279373 +0000 UTC m=+0.123895073 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:06:58 compute-0 nova_compute[188703]: 2026-02-24 16:06:58.425 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:06:59 compute-0 podman[204685]: time="2026-02-24T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:06:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:06:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
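The podman[204685] entries are the libpod REST API service's access log; the Go-http-client requests match prometheus-podman-exporter polling over the /run/podman/podman.sock bind mount shown in its config_data above. The same endpoint can be queried from the Python stdlib; UnixHTTPConnection below is a hypothetical helper, not part of http.client:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Hypothetical helper: HTTPConnection over an AF_UNIX socket.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")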
Feb 24 16:07:00 compute-0 nova_compute[188703]: 2026-02-24 16:07:00.667 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:01 compute-0 openstack_network_exporter[207830]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:07:01 compute-0 openstack_network_exporter[207830]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:07:01 compute-0 nova_compute[188703]: 2026-02-24 16:07:01.797 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:01 compute-0 nova_compute[188703]: 2026-02-24 16:07:01.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:03 compute-0 nova_compute[188703]: 2026-02-24 16:07:03.427 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:03 compute-0 nova_compute[188703]: 2026-02-24 16:07:03.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:03 compute-0 nova_compute[188703]: 2026-02-24 16:07:03.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:07:03 compute-0 nova_compute[188703]: 2026-02-24 16:07:03.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:07:03 compute-0 nova_compute[188703]: 2026-02-24 16:07:03.970 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
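Every "Running periodic task ComputeManager._*" line is oslo.service's PeriodicTasks machinery invoking a method registered with the periodic_task decorator. A minimal sketch of the pattern (Manager here is hypothetical; nova's ComputeManager declares its tasks the same way):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print("healing info cache")

    mgr = Manager(CONF)
    # Logs "Running periodic task Manager._heal_instance_info_cache" per pass.
    mgr.run_periodic_tasks(context=None)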
Feb 24 16:07:04 compute-0 podman[251650]: 2026-02-24 16:07:04.138753401 +0000 UTC m=+0.097029850 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:07:06 compute-0 nova_compute[188703]: 2026-02-24 16:07:06.801 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:06 compute-0 nova_compute[188703]: 2026-02-24 16:07:06.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:07 compute-0 nova_compute[188703]: 2026-02-24 16:07:07.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:08 compute-0 nova_compute[188703]: 2026-02-24 16:07:08.429 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:08 compute-0 nova_compute[188703]: 2026-02-24 16:07:08.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.946 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:09 compute-0 nova_compute[188703]: 2026-02-24 16:07:09.947 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
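The run of "Running periodic task ComputeManager._*" entries above is oslo.service walking its registered task table once per tick; a task can opt out at runtime the way _reclaim_queued_deletes does when reclaim_instance_interval <= 0. A minimal sketch of the mechanism with the oslo_service.periodic_task API (the task names and 60-second spacing here are illustrative, not nova's configuration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        """Methods decorated below are collected into a periodic task table."""

        @periodic_task.periodic_task(spacing=60)
        def _poll_unconfirmed_resizes(self, context):
            pass  # nova logs "Running periodic task ..." before each call

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            interval = 0  # stand-in for CONF.reclaim_instance_interval
            if interval <= 0:
                return    # the "skipping..." branch seen in the log

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)  # the service loop calls this on a timer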
Feb 24 16:07:11 compute-0 nova_compute[188703]: 2026-02-24 16:07:11.805 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:11 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:11.891 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:07:11 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:11.892 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
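The Matched UPDATE line is ovsdbapp's IDL notify loop handing an SB_Global row change to neutron's SbGlobalUpdateEvent, whose handler then waits 10 seconds before touching the chassis table. A rough sketch of such an event class against ovsdbapp's RowEvent interface (the body is paraphrased from the log, not neutron's actual implementation):

    import threading

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires on SB_Global updates, matching the event printed above."""

        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # "Delaying updating chassis table for 10 seconds"
            threading.Timer(10, self._update_chassis, args=(row.nb_cfg,)).start()

        def _update_chassis(self, nb_cfg):
            print("would record nb_cfg=%s in Chassis_Private" % nb_cfg)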
Feb 24 16:07:11 compute-0 nova_compute[188703]: 2026-02-24 16:07:11.895 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:11 compute-0 nova_compute[188703]: 2026-02-24 16:07:11.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:12 compute-0 nova_compute[188703]: 2026-02-24 16:07:12.958 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:12 compute-0 nova_compute[188703]: 2026-02-24 16:07:12.985 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:07:12 compute-0 nova_compute[188703]: 2026-02-24 16:07:12.985 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:07:12 compute-0 nova_compute[188703]: 2026-02-24 16:07:12.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
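The Acquiring/acquired/released triplet around "compute_resources" is oslo.concurrency's decorator-based locking; every resource-tracker method guarded by the same lock name serializes with the others. A minimal sketch with the oslo_concurrency.lockutils API (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body runs with the "compute_resources" lock held; lockutils emits
        # the Acquiring/acquired/released DEBUG lines seen above around it.
        pass

    clean_compute_node_cache()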
Feb 24 16:07:12 compute-0 nova_compute[188703]: 2026-02-24 16:07:12.986 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.386 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.388 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5352MB free_disk=72.2344856262207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
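The Hypervisor/Node resource view embeds the host's PCI inventory as JSON. When eyeballing such dumps it helps to group devices by vendor; a throwaway helper (vendor names follow the standard PCI ID assignments, 1af4 = Red Hat/virtio and 8086 = Intel; the two-entry sample is truncated from the list above):

    import json
    from collections import Counter

    pci_json = '''[
      {"dev_id": "pci_0000_00_06_0", "vendor_id": "1af4", "product_id": "1005"},
      {"dev_id": "pci_0000_00_01_0", "vendor_id": "8086", "product_id": "7000"}
    ]'''  # truncated sample of the pci_devices list above

    VENDORS = {"1af4": "Red Hat (virtio)", "8086": "Intel"}
    counts = Counter(VENDORS.get(d["vendor_id"], d["vendor_id"])
                     for d in json.loads(pci_json))
    print(dict(counts))  # {'Red Hat (virtio)': 1, 'Intel': 1}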
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.388 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.388 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.436 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.717 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.718 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.794 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.811 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
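The inventory dict in the report above is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g}")
    # VCPU: 32
    # MEMORY_MB: 7167
    # DISK_GB: 70.2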
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.814 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:07:13 compute-0 nova_compute[188703]: 2026-02-24 16:07:13.814 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:07:14 compute-0 podman[251676]: 2026-02-24 16:07:14.139914867 +0000 UTC m=+0.095066613 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:07:14 compute-0 podman[251677]: 2026-02-24 16:07:14.155705795 +0000 UTC m=+0.105642808 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:07:14 compute-0 nova_compute[188703]: 2026-02-24 16:07:14.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:14 compute-0 nova_compute[188703]: 2026-02-24 16:07:14.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:07:14 compute-0 nova_compute[188703]: 2026-02-24 16:07:14.967 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:07:16 compute-0 nova_compute[188703]: 2026-02-24 16:07:16.808 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:17 compute-0 podman[251715]: 2026-02-24 16:07:17.115036389 +0000 UTC m=+0.072385296 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=kepler, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:07:17 compute-0 podman[251716]: 2026-02-24 16:07:17.14828545 +0000 UTC m=+0.097775590 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 24 16:07:18 compute-0 nova_compute[188703]: 2026-02-24 16:07:18.435 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:21 compute-0 nova_compute[188703]: 2026-02-24 16:07:21.812 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:21.904 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
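The Running txn line shows the metadata agent acknowledging nb_cfg=10 by writing it into Chassis_Private external_ids through a DbSetCommand. With a connected ovsdbapp API object the same write is usually expressed through its db_set helper; a hedged sketch (the api object and its construction are assumptions, only the table/record/value shape comes from the log):

    def ack_sb_cfg(api, chassis_uuid, nb_cfg):
        """Replay the DbSetCommand above against a connected ovsdbapp api.

        `api` must be an ovsdbapp API bound to the OVN southbound DB
        (construction omitted here).
        """
        api.db_set(
            'Chassis_Private', chassis_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        ).execute(check_error=True)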
Feb 24 16:07:23 compute-0 podman[251753]: 2026-02-24 16:07:23.13442503 +0000 UTC m=+0.091016893 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:07:23 compute-0 nova_compute[188703]: 2026-02-24 16:07:23.439 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:26 compute-0 nova_compute[188703]: 2026-02-24 16:07:26.816 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:28 compute-0 nova_compute[188703]: 2026-02-24 16:07:28.442 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:29 compute-0 podman[251775]: 2026-02-24 16:07:29.139744002 +0000 UTC m=+0.095256451 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 24 16:07:29 compute-0 podman[251776]: 2026-02-24 16:07:29.169850226 +0000 UTC m=+0.124968233 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 24 16:07:29 compute-0 podman[204685]: time="2026-02-24T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:07:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:07:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
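The podman[204685] access-log lines are the libpod REST API answering a scrape over the unix socket (this is what podman_exporter reaches through CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of issuing the same containers/json GET (socket path and API version copied from the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])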
Feb 24 16:07:30 compute-0 sshd-session[251773]: Received disconnect from 111.228.14.125 port 56622:11:  [preauth]
Feb 24 16:07:30 compute-0 sshd-session[251773]: Disconnected from authenticating user root 111.228.14.125 port 56622 [preauth]
Feb 24 16:07:31 compute-0 openstack_network_exporter[207830]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:07:31 compute-0 openstack_network_exporter[207830]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
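The two openstack_network_exporter errors above are expected on this host: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to the userspace (netdev/DPDK) datapath, and with the kernel datapath there are no PMD threads to query, so ovs-vswitchd answers "please specify an existing datapath". A small probe showing how to tolerate that (assumes ovs-appctl is on PATH):

    import subprocess

    def pmd_rxq_show():
        """Run the appctl call the exporter makes; None means the host has
        no userspace (netdev) datapath, i.e. the ERROR condition above."""
        try:
            return subprocess.run(
                ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                capture_output=True, text=True, check=True,
            ).stdout
        except (subprocess.CalledProcessError, FileNotFoundError):
            return None

    print(pmd_rxq_show())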
Feb 24 16:07:31 compute-0 nova_compute[188703]: 2026-02-24 16:07:31.819 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:33 compute-0 nova_compute[188703]: 2026-02-24 16:07:33.445 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:35 compute-0 podman[251819]: 2026-02-24 16:07:35.112044867 +0000 UTC m=+0.071921833 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:07:36 compute-0 nova_compute[188703]: 2026-02-24 16:07:36.822 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:37 compute-0 nova_compute[188703]: 2026-02-24 16:07:37.703 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:07:38 compute-0 nova_compute[188703]: 2026-02-24 16:07:38.448 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.836 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.836 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
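These two lines record that the [pollsters] source has more pollsters than worker threads ([1] here), so the cycle executes them one after another. A toy reproduction of that serialization with the same stdlib executor (timings invented):

    import concurrent.futures
    import time

    def pollster(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    # 1 worker, N tasks: they run back to back, so the cycle takes ~N * 0.1s,
    # which is the "longer than expected" condition ceilometer warns about.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        start = time.time()
        list(pool.map(pollster, ["cpu", "memory.usage", "power.state"]))
        print(f"cycle took {time.time() - start:.2f}s")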
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
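Every "Registering pollster [<stevedore.extension.Extension ...>]" line above wraps one plugin loaded through stevedore from ceilometer's pollster entry-point namespace. Enumerating that namespace directly looks like the sketch below (the namespace string follows ceilometer's setup.cfg; it only yields entries on a host where ceilometer is installed):

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # compute-agent pollster namespace
        invoke_on_load=False,                 # enumerate without instantiating
    )
    for ext in mgr:
        print(ext.name, ext.entry_point)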
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
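The discover/skip pairs above show ceilometer's polling loop running the local_instances discovery once per pollster and skipping any pollster whose discovery returns nothing (no VMs exist on this host yet; the first builds arrive at 16:08:12 below). A minimal sketch of that pattern, as a hypothetical simplification of ceilometer.polling.manager rather than its actual code:

```python
# Minimal sketch of the discover-then-skip loop logged above.
# Hypothetical simplification: the real AgentManager resolves
# pollsters and discovery methods through stevedore plugins.
import logging

LOG = logging.getLogger(__name__)

def run_cycle(pollsters, discover_local_instances):
    for pollster in pollsters:
        resources = discover_local_instances()  # e.g. libvirt domains
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            continue
        for sample in pollster.get_samples(resources):
            yield sample
```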
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.850 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:07:39.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:07:41 compute-0 nova_compute[188703]: 2026-02-24 16:07:41.825 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:42 compute-0 ovn_controller[98701]: 2026-02-24T16:07:42Z|00065|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Feb 24 16:07:43 compute-0 nova_compute[188703]: 2026-02-24 16:07:43.450 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:44 compute-0 podman[251844]: 2026-02-24 16:07:44.796990244 +0000 UTC m=+0.106681116 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:07:44 compute-0 podman[251843]: 2026-02-24 16:07:44.802050725 +0000 UTC m=+0.116450767 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:07:46 compute-0 nova_compute[188703]: 2026-02-24 16:07:46.829 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:48 compute-0 podman[251888]: 2026-02-24 16:07:48.131121771 +0000 UTC m=+0.095862836 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=)
Feb 24 16:07:48 compute-0 podman[251889]: 2026-02-24 16:07:48.130983987 +0000 UTC m=+0.093452569 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
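The health_status=healthy events above are podman's periodic healthchecks executing the 'test' command declared in each container's config_data (mounted under /var/lib/openstack/healthchecks/...). The same check can be run on demand; a hedged sketch, with container names taken from the log and the podman CLI assumed on PATH:

```python
# Re-run the configured healthcheck for a container, as podman's
# timer does for the events above. Exit status 0 means healthy.
import subprocess

def is_healthy(container: str) -> bool:
    result = subprocess.run(["podman", "healthcheck", "run", container])
    return result.returncode == 0

if __name__ == "__main__":
    for name in ("ovn_metadata_agent", "node_exporter", "kepler"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")
```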
Feb 24 16:07:48 compute-0 nova_compute[188703]: 2026-02-24 16:07:48.452 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:50 compute-0 nova_compute[188703]: 2026-02-24 16:07:50.383 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:50 compute-0 nova_compute[188703]: 2026-02-24 16:07:50.457 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:50 compute-0 nova_compute[188703]: 2026-02-24 16:07:50.529 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:51 compute-0 nova_compute[188703]: 2026-02-24 16:07:51.832 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:53 compute-0 nova_compute[188703]: 2026-02-24 16:07:53.455 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:54 compute-0 podman[251929]: 2026-02-24 16:07:54.143306491 +0000 UTC m=+0.108585628 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, version=9.7, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vcs-type=git, config_id=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:07:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:55.735 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:07:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:55.736 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:07:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:07:55.736 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
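The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils logging the lifecycle of the named lock around neutron's ProcessMonitor._check_child_processes. A minimal sketch of the same pattern (hypothetical class, not neutron's implementation):

```python
from oslo_concurrency import lockutils

class ProcessMonitorSketch:
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes(self):
        # Runs with the named lock held; oslo.concurrency emits the
        # "Acquiring"/"acquired"/"released" DEBUG lines seen above.
        pass
```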
Feb 24 16:07:56 compute-0 nova_compute[188703]: 2026-02-24 16:07:56.834 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:57 compute-0 nova_compute[188703]: 2026-02-24 16:07:57.568 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:58 compute-0 nova_compute[188703]: 2026-02-24 16:07:58.458 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:07:59 compute-0 podman[204685]: time="2026-02-24T16:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:07:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:07:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
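The two GET requests above are libpod REST calls arriving over podman's unix socket (the '@' is the access-log placeholder for a socket client; the caller here is podman_exporter). A hedged sketch of the same containers/json call using only the standard library; the socket path matches the one mounted into podman_exporter and may differ on other hosts:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials an AF_UNIX socket instead of TCP."""

    def __init__(self, path: str):
        super().__init__("localhost")  # host only used for the Host header
        self._path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```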
Feb 24 16:08:00 compute-0 podman[251949]: 2026-02-24 16:08:00.151484643 +0000 UTC m=+0.106842410 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223)
Feb 24 16:08:00 compute-0 podman[251950]: 2026-02-24 16:08:00.197767646 +0000 UTC m=+0.147223400 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 24 16:08:00 compute-0 nova_compute[188703]: 2026-02-24 16:08:00.235 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:00 compute-0 nova_compute[188703]: 2026-02-24 16:08:00.584 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:00 compute-0 nova_compute[188703]: 2026-02-24 16:08:00.654 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:00 compute-0 nova_compute[188703]: 2026-02-24 16:08:00.655 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:01 compute-0 openstack_network_exporter[207830]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:08:01 compute-0 openstack_network_exporter[207830]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
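Both failing calls above are dpif-netdev appctl commands, which apply only to a userspace (netdev/DPDK) datapath; on a host running the kernel datapath, ovs-vswitchd answers "please specify an existing datapath", which is likely what the exporter is hitting. A hedged sketch of the equivalent probes from Python, assuming ovs-appctl is on PATH:

```python
import subprocess

# Same commands the exporter's appctl.go invokes in the errors above.
for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
    proc = subprocess.run(["ovs-appctl", cmd],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        print(cmd, "->", proc.stderr.strip() or proc.stdout.strip())
```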
Feb 24 16:08:01 compute-0 nova_compute[188703]: 2026-02-24 16:08:01.836 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:02 compute-0 nova_compute[188703]: 2026-02-24 16:08:02.007 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:02 compute-0 nova_compute[188703]: 2026-02-24 16:08:02.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:03 compute-0 nova_compute[188703]: 2026-02-24 16:08:03.461 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:04 compute-0 nova_compute[188703]: 2026-02-24 16:08:04.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:04 compute-0 nova_compute[188703]: 2026-02-24 16:08:04.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:08:04 compute-0 nova_compute[188703]: 2026-02-24 16:08:04.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:08:04 compute-0 nova_compute[188703]: 2026-02-24 16:08:04.963 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
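Every "Running periodic task ComputeManager.*" line above and below comes from oslo.service's periodic task runner. A minimal sketch of how such tasks are declared and driven (hypothetical manager, not nova's ComputeManager):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class ManagerSketch(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # nova's version rebuilds per-instance network info; with no
        # instances it logs "Didn't find any instances ..." as above.
        pass

manager = ManagerSketch()
manager.run_periodic_tasks(context=None)  # normally fired on a timer
```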
Feb 24 16:08:06 compute-0 podman[251992]: 2026-02-24 16:08:06.092872503 +0000 UTC m=+0.059288423 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:08:06 compute-0 nova_compute[188703]: 2026-02-24 16:08:06.840 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:07 compute-0 nova_compute[188703]: 2026-02-24 16:08:07.303 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:07 compute-0 nova_compute[188703]: 2026-02-24 16:08:07.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:08 compute-0 nova_compute[188703]: 2026-02-24 16:08:08.465 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:09 compute-0 nova_compute[188703]: 2026-02-24 16:08:09.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:09 compute-0 nova_compute[188703]: 2026-02-24 16:08:09.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:09 compute-0 nova_compute[188703]: 2026-02-24 16:08:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:09 compute-0 nova_compute[188703]: 2026-02-24 16:08:09.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:08:10 compute-0 nova_compute[188703]: 2026-02-24 16:08:10.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:11 compute-0 nova_compute[188703]: 2026-02-24 16:08:11.843 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:12 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:12.029 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
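The "Matched UPDATE: SbGlobalUpdateEvent" line is ovsdbapp's event matcher firing on an SB_Global row change (nb_cfg 10 -> 11). A hedged sketch of such an event class, modeled on the log rather than copied from neutron:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # Match UPDATE events on any row of the SB_Global table.
        super().__init__((self.ROW_UPDATE,), "SB_Global", None)

    def run(self, event, row, old):
        # The metadata agent reacts by refreshing its chassis record;
        # the log shows it delaying that update by 5 seconds.
        print("nb_cfg bumped to", row.nb_cfg)
```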
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.029 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.030 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:12 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:12.031 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.032 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.072 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.198 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.199 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.218 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.219 188707 INFO nova.compute.claims [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.394 188707 DEBUG nova.compute.provider_tree [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.418 188707 DEBUG nova.scheduler.client.report [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
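Placement derives schedulable capacity from the inventory dict above as (total - reserved) * allocation_ratio. A quick check with the logged numbers:

```python
# Inventory values copied from the report-client DEBUG line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```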
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.464 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.465 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.517 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.518 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.551 188707 INFO nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.569 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.688 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.689 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.690 188707 INFO nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Creating image(s)
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.690 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.691 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.691 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.692 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:12 compute-0 nova_compute[188703]: 2026-02-24 16:08:12.692 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.059 188707 DEBUG nova.policy [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11baf60433794c759ac9aae534db1341', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '60a6dc6c7c8f4f06b380816ca7e12999', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
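Annotation: the "Policy check ... failed" line above is DEBUG, not an error. The request carries only the member and reader roles, so the network:attach_external_network rule denies and Nova simply proceeds without external-network attach privileges. A hedged sketch of how such a check evaluates under oslo.policy; the rule default re-created here is illustrative (the real defaults live in nova.policies), and the credential dict is a stand-in.

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Illustrative admin-only default for the rule seen in the log.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network',
                           'role:admin'))

    creds = {'roles': ['member', 'reader'], 'project_id': 'example'}
    # With do_raise=False, enforce() returns False instead of raising,
    # which Nova logs as "Policy check ... failed".
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds, do_raise=False)
    print(allowed)  # False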
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.388 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.388 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.405 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.467 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.478 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.478 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.487 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.487 188707 INFO nova.compute.claims [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.810 188707 DEBUG nova.compute.provider_tree [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.841 188707 DEBUG nova.scheduler.client.report [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
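Annotation: the inventory dict above is what placement uses to size this provider; effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged here:

    # Effective placement capacity, using the inventory logged for
    # provider 3c29c547-... above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2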
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.868 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.869 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.929 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.930 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.955 188707 INFO nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:08:13 compute-0 nova_compute[188703]: 2026-02-24 16:08:13.978 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.090 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.091 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.093 188707 INFO nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Creating image(s)
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.094 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.094 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.095 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.095 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.167 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.225 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
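Annotation: every qemu-img info probe in this log is wrapped in oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 seconds of CPU (--cpu=30) before exec'ing the real command, so a malformed or hostile image cannot stall or balloon the prober. A self-contained reproduction of the logged invocation, assuming qemu-img and oslo.concurrency are installed; the temp image is a stand-in for the _base file.

    import json
    import os
    import subprocess
    import tempfile

    img = os.path.join(tempfile.mkdtemp(), 'img.qcow2')
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2', img, '64M'],
                   check=True)

    # Mirror the logged probe: prlimit applies the resource caps,
    # then qemu-img emits JSON on stdout.
    cmd = ['python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', img, '--force-share', '--output=json']
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    info = json.loads(out.stdout)
    print(info['format'], info['virtual-size'])  # qcow2 67108864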
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.226 188707 DEBUG nova.virt.images [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] ee41af80-6a60-4735-8135-3a06de2a36b2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.229 188707 DEBUG nova.privsep.utils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.230 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.part /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.498 188707 DEBUG nova.policy [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '40089d2ccf484a7c9ecdf03cf6fe53bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff039f17be824e0da1015761ba1fc96a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.536 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.part /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.converted" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
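Annotation: the convert step above flattens the downloaded qcow2 into a raw base image, as announced at 16:08:14.226. Nova first probes (16:08:14.229) whether /var/lib/nova/instances supports direct I/O and only then passes '-t none' (cache mode "none", i.e. O_DIRECT) to qemu-img. A rough sketch of that probe plus the logged convert; the real check in nova.privsep.utils also performs an aligned write, the paths here are placeholders, and the writethrough fallback is an assumption for illustration.

    import os
    import subprocess

    def supports_direct_io(dirpath):
        # Rough equivalent of the logged supports_direct_io check:
        # opening with O_DIRECT fails (EINVAL) on filesystems such as
        # tmpfs that cannot do direct I/O.
        probe = os.path.join(dirpath, '.directio.test')
        try:
            os.close(os.open(probe, os.O_CREAT | os.O_WRONLY | os.O_DIRECT))
            return True
        except OSError:
            return False
        finally:
            if os.path.exists(probe):
                os.unlink(probe)

    cache = 'none' if supports_direct_io('/var/lib/nova/instances') else 'writethrough'
    # qcow2 -> raw flattening, as logged at 16:08:14.230 (placeholder paths):
    subprocess.run(['qemu-img', 'convert', '-t', cache, '-O', 'raw',
                    '-f', 'qcow2', 'base.part', 'base.converted'], check=True)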
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.540 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.601 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.603 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.629 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.630 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
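Annotation: the lock timings above tell the image-cache story. The first builder (req-c34ebda0) held the cache lock for 1.911s while it downloaded and converted the base image; the second builder (req-035cb594), spawning from the same image, waited 0.534s and then held the lock for only 0.001s, because by then the base file already existed and its fetch became a no-op. A sketch of that fetch_func_sync shape, with hypothetical names and an illustrative lock_path:

    import os
    from oslo_concurrency import lockutils

    def cache_base_image(base_path, fetch_func):
        # First caller downloads under the lock; later callers block,
        # find the file already present, and return almost immediately.
        @lockutils.synchronized(os.path.basename(base_path), external=True,
                                lock_path='/tmp/nova-locks')
        def fetch_func_sync():
            if not os.path.exists(base_path):
                fetch_func(base_path)
        fetch_func_sync()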
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.649 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.662 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.716 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.717 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.718 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.743 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.760 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.763 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.805 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.806 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.848 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
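Annotation: the instance disk created above is not a copy of the base image. qemu-img create layers a qcow2 overlay on top of the shared raw base (backing_file=...,backing_fmt=raw), sized to the flavor's 1073741824 bytes (1 GiB), so concurrent instances share the read-only base and write only their deltas. A self-contained equivalent, assuming qemu-img is installed; paths are scratch stand-ins for the _base and instance directories.

    import os
    import subprocess
    import tempfile

    base = os.path.join(tempfile.mkdtemp(), 'base.raw')
    overlay = os.path.join(os.path.dirname(base), 'disk')

    # Stand-in for the shared _base image (raw, 1 GiB virtual size).
    subprocess.run(['qemu-img', 'create', '-f', 'raw', base, '1073741824'],
                   check=True)
    # Copy-on-write overlay, mirroring the logged command: writes land
    # in the overlay, reads of untouched clusters fall through to base.
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-o', f'backing_file={base},backing_fmt=raw',
                    overlay, '1073741824'], check=True)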
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.849 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.850 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.864 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.888 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.936 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.937 188707 DEBUG nova.virt.disk.api [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Checking if we can resize image /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.938 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.954 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.956 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.956 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.992 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.993 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.994 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.995 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.997 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:14 compute-0 nova_compute[188703]: 2026-02-24 16:08:14.998 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.000 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.018 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.020 188707 DEBUG nova.virt.disk.api [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Cannot resize image /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
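Annotation: "Cannot resize image ... to a smaller size" at DEBUG level is benign. Before growing a disk to the flavor size, Nova compares the overlay's current virtual size against the target and skips the resize unless the target is strictly larger; here the overlay was already created at 1073741824 bytes. An approximation of that guard, assuming qemu-img is installed; the function name mirrors the log, the rest is a sketch.

    import json
    import os
    import subprocess
    import tempfile

    def can_resize_image(path, new_size):
        # Approximation of nova.virt.disk.api.can_resize_image:
        # only grow operations are allowed, shrinking is refused.
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            check=True, capture_output=True, text=True)
        return json.loads(out.stdout)['virtual-size'] < new_size

    disk = os.path.join(tempfile.mkdtemp(), 'disk')
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2', disk, '1073741824'],
                   check=True)
    print(can_resize_image(disk, 1073741824))  # False, as in the log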
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.021 188707 DEBUG nova.objects.instance [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lazy-loading 'migration_context' on Instance uuid 14d740e5-75fa-4dec-a80f-f967c1cd1930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.040 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.041 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Ensure instance console log exists: /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.041 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.042 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.043 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.074 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.075 188707 DEBUG nova.virt.disk.api [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Checking if we can resize image /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.075 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:15 compute-0 podman[252055]: 2026-02-24 16:08:15.125497401 +0000 UTC m=+0.081920220 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.132 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.133 188707 DEBUG nova.virt.disk.api [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Cannot resize image /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.133 188707 DEBUG nova.objects.instance [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lazy-loading 'migration_context' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.156 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:08:15 compute-0 podman[252056]: 2026-02-24 16:08:15.157250531 +0000 UTC m=+0.107683674 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.156 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Ensure instance console log exists: /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.157 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.157 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.158 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.362 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Successfully created port: ca1db542-80c5-48ff-a87c-ab82662c6823 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.399 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.399 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.431 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.445 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.446 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5340MB free_disk=72.19978713989258GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.446 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.446 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.641 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.712 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Successfully created port: c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.722 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 14d740e5-75fa-4dec-a80f-f967c1cd1930 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.723 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4ed039f2-92fd-4c07-9a3c-df2da1172e12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.752 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance e99b5727-be77-4c73-a60b-26188853674c has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.753 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.753 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
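Annotation: the final resource view reconciles with the claims logged earlier. Two instances (14d740e5 and 4ed039f2) are counted, each allocated 128MB / 1 VCPU / 1GB per the placement allocations above, while e99b5727 was skipped as scheduled-but-not-started; used_ram additionally includes the 512MB host reservation from the inventory, while used_disk here apparently reflects only the instance disks. The arithmetic:

    # Reconciling the "Final resource view" with the two in-flight
    # instance claims (128MB / 1 VCPU / 1GB each) plus the 512MB host
    # RAM reservation from the inventory data.
    reserved_ram_mb = 512
    claims = [{'MEMORY_MB': 128, 'VCPU': 1, 'DISK_GB': 1}] * 2

    used_ram = reserved_ram_mb + sum(c['MEMORY_MB'] for c in claims)
    used_vcpus = sum(c['VCPU'] for c in claims)
    used_disk = sum(c['DISK_GB'] for c in claims)
    print(used_ram, used_vcpus, used_disk)  # 768 2 2 -> matches the log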
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.855 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.870 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.896 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.896 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.897 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.908 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:08:15 compute-0 nova_compute[188703]: 2026-02-24 16:08:15.908 188707 INFO nova.compute.claims [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.071 188707 DEBUG nova.compute.provider_tree [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.090 188707 DEBUG nova.scheduler.client.report [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.119 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.120 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.166 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.166 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.184 188707 INFO nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.205 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.274 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.275 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.276 188707 INFO nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Creating image(s)
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.276 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.277 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.277 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.289 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.354 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.355 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.355 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.366 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.444 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.445 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.499 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.501 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.502 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.557 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.558 188707 DEBUG nova.virt.disk.api [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Checking if we can resize image /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.559 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.606 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.608 188707 DEBUG nova.virt.disk.api [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Cannot resize image /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.609 188707 DEBUG nova.objects.instance [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lazy-loading 'migration_context' on Instance uuid e99b5727-be77-4c73-a60b-26188853674c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.628 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.629 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Ensure instance console log exists: /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.629 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.630 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.630 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.650 188707 DEBUG nova.policy [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5f4c6a77687e48f2b855bd5e6fb8bb86', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8b20224ac879431e8ca256556525e6fd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:08:16 compute-0 nova_compute[188703]: 2026-02-24 16:08:16.846 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:17 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:17.034 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:17 compute-0 nova_compute[188703]: 2026-02-24 16:08:17.845 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Successfully updated port: c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:08:17 compute-0 nova_compute[188703]: 2026-02-24 16:08:17.866 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:17 compute-0 nova_compute[188703]: 2026-02-24 16:08:17.866 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:17 compute-0 nova_compute[188703]: 2026-02-24 16:08:17.867 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.184 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.288 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Successfully created port: 5d11a818-6e9b-4361-9065-15b7ad0b90cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.296 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Successfully updated port: ca1db542-80c5-48ff-a87c-ab82662c6823 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.314 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.315 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquired lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.315 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.408 188707 DEBUG nova.compute.manager [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.409 188707 DEBUG nova.compute.manager [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing instance network info cache due to event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.409 188707 DEBUG oslo_concurrency.lockutils [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.471 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.530 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.547 188707 DEBUG nova.compute.manager [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-changed-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.547 188707 DEBUG nova.compute.manager [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Refreshing instance network info cache due to event network-changed-ca1db542-80c5-48ff-a87c-ab82662c6823. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:08:18 compute-0 nova_compute[188703]: 2026-02-24 16:08:18.548 188707 DEBUG oslo_concurrency.lockutils [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:19 compute-0 podman[252118]: 2026-02-24 16:08:19.13406173 +0000 UTC m=+0.089332695 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, config_id=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, managed_by=edpm_ansible)
Feb 24 16:08:19 compute-0 podman[252119]: 2026-02-24 16:08:19.160637427 +0000 UTC m=+0.116797966 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.822 188707 DEBUG nova.network.neutron [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.852 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.853 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Instance network_info: |[{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.854 188707 DEBUG oslo_concurrency.lockutils [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.854 188707 DEBUG nova.network.neutron [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.859 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Start _get_guest_xml network_info=[{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.869 188707 WARNING nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.880 188707 DEBUG nova.virt.libvirt.host [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.881 188707 DEBUG nova.virt.libvirt.host [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.888 188707 DEBUG nova.virt.libvirt.host [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.889 188707 DEBUG nova.virt.libvirt.host [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.889 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.890 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.890 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.891 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.891 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.891 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.891 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.892 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.892 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.892 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.892 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.893 188707 DEBUG nova.virt.hardware [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.897 188707 DEBUG nova.virt.libvirt.vif [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1086727361',display_name='tempest-AttachInterfacesUnderV243Test-server-1086727361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1086727361',id=7,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPRzxrgD/mVIpbyPawYD3WAG4tYU0QjDhPzO0JVhM25FdcYSiej4ytWWlcxVnG8odOA7sUe2Mbk3XrtHW6cAicJSYxZJb3LRfc4Sq6paFTTk27LRVk6RCnMPCsWqOW0iw==',key_name='tempest-keypair-34402587',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff039f17be824e0da1015761ba1fc96a',ramdisk_id='',reservation_id='r-hc07cw81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1803128698',owner_user_name='tempest-AttachInterfacesUnderV243Test-1803128698-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40089d2ccf484a7c9ecdf03cf6fe53bb',uuid=4ed039f2-92fd-4c07-9a3c-df2da1172e12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.897 188707 DEBUG nova.network.os_vif_util [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converting VIF {"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.898 188707 DEBUG nova.network.os_vif_util [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.899 188707 DEBUG nova.objects.instance [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.913 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <uuid>4ed039f2-92fd-4c07-9a3c-df2da1172e12</uuid>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <name>instance-00000007</name>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1086727361</nova:name>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:08:19</nova:creationTime>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:user uuid="40089d2ccf484a7c9ecdf03cf6fe53bb">tempest-AttachInterfacesUnderV243Test-1803128698-project-member</nova:user>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:project uuid="ff039f17be824e0da1015761ba1fc96a">tempest-AttachInterfacesUnderV243Test-1803128698</nova:project>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         <nova:port uuid="c26aa9e8-b157-4dd8-8c4c-2767f7a725f4">
Feb 24 16:08:19 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <system>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="serial">4ed039f2-92fd-4c07-9a3c-df2da1172e12</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="uuid">4ed039f2-92fd-4c07-9a3c-df2da1172e12</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </system>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <os>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </os>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <features>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </features>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.config"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:1c:99:52"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <target dev="tapc26aa9e8-b1"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/console.log" append="off"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <video>
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </video>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:08:19 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:08:19 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:08:19 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:08:19 compute-0 nova_compute[188703]: </domain>
Feb 24 16:08:19 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
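The complete <domain> document above is what _get_guest_xml returns just before the guest is defined in libvirt. A minimal sketch of that hand-off using the libvirt-python bindings, assuming the XML has been saved to a file; Nova itself goes through its own Host/Guest wrappers rather than calling the bindings this directly:

import libvirt

# Hypothetical path standing in for the <domain> document dumped above.
with open("/tmp/instance.xml") as f:
    xml = f.read()

conn = libvirt.open("qemu:///system")  # system QEMU/KVM driver
try:
    dom = conn.defineXML(xml)    # persist the domain definition
    dom.createWithFlags(0)       # boot the guest (equivalent to dom.create())
finally:
    conn.close()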
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.914 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Preparing to wait for external event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.914 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.915 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.915 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
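The three lockutils lines above are the standard oslo.concurrency pattern: a semaphore named "<instance-uuid>-events" guards the per-instance event registry while _create_or_get_event runs. A minimal sketch, assuming a plain dict as a stand-in for Nova's registry:

from oslo_concurrency import lockutils

events = {}  # stand-in for Nova's per-instance event registry
instance_uuid = "4ed039f2-92fd-4c07-9a3c-df2da1172e12"

with lockutils.lock(f"{instance_uuid}-events"):
    # Under the lock, create the entry for the awaited event if missing,
    # mirroring _create_or_get_event; the value type here is a placeholder.
    events.setdefault(
        "network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", object())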
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.916 188707 DEBUG nova.virt.libvirt.vif [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1086727361',display_name='tempest-AttachInterfacesUnderV243Test-server-1086727361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1086727361',id=7,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPRzxrgD/mVIpbyPawYD3WAG4tYU0QjDhPzO0JVhM25FdcYSiej4ytWWlcxVnG8odOA7sUe2Mbk3XrtHW6cAicJSYxZJb3LRfc4Sq6paFTTk27LRVk6RCnMPCsWqOW0iw==',key_name='tempest-keypair-34402587',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff039f17be824e0da1015761ba1fc96a',ramdisk_id='',reservation_id='r-hc07cw81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1803128698',owner_user_name='tempest-AttachInterfacesUnderV243Test-1803128698-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40089d2ccf484a7c9ecdf03cf6fe53bb',uuid=4ed039f2-92fd-4c07-9a3c-df2da1172e12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.916 188707 DEBUG nova.network.os_vif_util [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converting VIF {"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.917 188707 DEBUG nova.network.os_vif_util [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.917 188707 DEBUG os_vif [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
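os_vif.plug() receives the converted VIFOpenVSwitch object shown above plus an InstanceInfo. A sketch of the same call made standalone, with field values copied from the log; the instance name below is an assumption (it is not shown in this excerpt), and in a real deployment Nova builds these objects itself:

import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # load the ovs/noop/... plugins via stevedore

net = network.Network(id="ea8bd642-3dcc-421c-b6d8-009d58526417",
                      bridge="br-int")
port = vif.VIFOpenVSwitch(
    id="c26aa9e8-b157-4dd8-8c4c-2767f7a725f4",
    address="fa:16:3e:1c:99:52",
    vif_name="tapc26aa9e8-b1",
    bridge_name="br-int",
    network=net,
    port_profile=vif.VIFPortProfileOpenVSwitch(
        interface_id="c26aa9e8-b157-4dd8-8c4c-2767f7a725f4"))
info = instance_info.InstanceInfo(
    uuid="4ed039f2-92fd-4c07-9a3c-df2da1172e12",
    name="instance-00000007")  # assumed name; not in this log excerpt

os_vif.plug(port, info)  # emits "Successfully plugged vif ..." on success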
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.918 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.918 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.919 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.922 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.923 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26aa9e8-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.923 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc26aa9e8-b1, col_values=(('external_ids', {'iface-id': 'c26aa9e8-b157-4dd8-8c4c-2767f7a725f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:99:52', 'vm-uuid': '4ed039f2-92fd-4c07-9a3c-df2da1172e12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.926 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:19 compute-0 NetworkManager[56995]: <info>  [1771949299.9276] manager: (tapc26aa9e8-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:19 compute-0 nova_compute[188703]: 2026-02-24 16:08:19.936 188707 INFO os_vif [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1')
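The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions against the local OVSDB: an idempotent add-bridge (hence "Transaction caused no change"), then add-port plus the external_ids that let ovn-controller match the OVS interface to the Neutron port. A sketch of the same transaction issued directly with ovsdbapp; the socket path and timeout are assumptions (Nova/os-vif take them from configuration):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # No-op when br-int already exists (may_exist=True).
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tapc26aa9e8-b1", may_exist=True))
    # external_ids:iface-id is what binds the OVS interface to the port.
    txn.add(api.db_set(
        "Interface", "tapc26aa9e8-b1",
        ("external_ids", {
            "iface-id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:1c:99:52",
            "vm-uuid": "4ed039f2-92fd-4c07-9a3c-df2da1172e12"})))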
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.001 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.002 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.002 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] No VIF found with MAC fa:16:3e:1c:99:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.002 188707 INFO nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Using config drive
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.006 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Successfully updated port: 5d11a818-6e9b-4361-9065-15b7ad0b90cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.023 188707 DEBUG nova.network.neutron [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updating instance_info_cache with network_info: [{"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.036 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.036 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquired lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.036 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.040 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Releasing lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.041 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Instance network_info: |[{"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.041 188707 DEBUG oslo_concurrency.lockutils [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.041 188707 DEBUG nova.network.neutron [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Refreshing network info cache for port ca1db542-80c5-48ff-a87c-ab82662c6823 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.045 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Start _get_guest_xml network_info=[{"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.065 188707 WARNING nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.075 188707 DEBUG nova.virt.libvirt.host [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.076 188707 DEBUG nova.virt.libvirt.host [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.082 188707 DEBUG nova.virt.libvirt.host [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.082 188707 DEBUG nova.virt.libvirt.host [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
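The pair of probes above first looks for a cgroup-v1 cpu controller (absent on this cgroup-v2 host) and then for the v2 controller, which is found. On a unified hierarchy the v2 check amounts to reading cgroup.controllers; a minimal sketch, assuming the standard mount point (Nova's own check differs in detail):

def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
    # On cgroup-v2 hosts this file lists the enabled root controllers,
    # e.g. "cpuset cpu io memory hugetlb pids rdma misc".
    try:
        with open(path) as f:
            return "cpu" in f.read().split()
    except FileNotFoundError:
        return False  # no unified hierarchy; fall back to the v1 probe

print(has_cgroupsv2_cpu_controller())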
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.083 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.083 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.083 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.084 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.084 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.084 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.085 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.085 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.085 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.086 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.086 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.086 188707 DEBUG nova.virt.hardware [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
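The lines from "Flavor limits 0:0:0" through "Sorted desired topologies" are Nova's CPU-topology search: with no flavor or image constraints, every sockets x cores x threads factorization of the vCPU count is a candidate, and for 1 vCPU the only one is 1:1:1. A simplified sketch of the enumeration (exact factorizations only; the real code also applies the limits and preference ordering logged above):

def possible_topologies(vcpus):
    # Yield (sockets, cores, threads) triples whose product is exactly vcpus.
    for s in range(1, vcpus + 1):
        for c in range(1, vcpus + 1):
            for t in range(1, vcpus + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
print(list(possible_topologies(4)))  # e.g. (1, 2, 2), (4, 1, 1), ...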
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.090 188707 DEBUG nova.virt.libvirt.vif [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2029362323',display_name='tempest-ServersTestJSON-server-2029362323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2029362323',id=6,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUKNv5sASPVVbHbeNJyEv+jTZFVLgLh0UG0y7vF2Z+RF00Ms2LzT+bzZz4VmeRuEyhOSEtsx0wEQBSrjeB6gGMSTlb2QGEtdakWbMAcLEVgfuewmLKJ9ffPHeSzJdqhZQ==',key_name='tempest-keypair-2010026268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60a6dc6c7c8f4f06b380816ca7e12999',ramdisk_id='',reservation_id='r-xiuewpiu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-767085278',owner_user_name='tempest-ServersTestJSON-767085278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11baf60433794c759ac9aae534db1341',uuid=14d740e5-75fa-4dec-a80f-f967c1cd1930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.090 188707 DEBUG nova.network.os_vif_util [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converting VIF {"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.091 188707 DEBUG nova.network.os_vif_util [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.092 188707 DEBUG nova.objects.instance [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14d740e5-75fa-4dec-a80f-f967c1cd1930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.110 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <uuid>14d740e5-75fa-4dec-a80f-f967c1cd1930</uuid>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <name>instance-00000006</name>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:name>tempest-ServersTestJSON-server-2029362323</nova:name>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:08:20</nova:creationTime>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:user uuid="11baf60433794c759ac9aae534db1341">tempest-ServersTestJSON-767085278-project-member</nova:user>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:project uuid="60a6dc6c7c8f4f06b380816ca7e12999">tempest-ServersTestJSON-767085278</nova:project>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         <nova:port uuid="ca1db542-80c5-48ff-a87c-ab82662c6823">
Feb 24 16:08:20 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <system>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="serial">14d740e5-75fa-4dec-a80f-f967c1cd1930</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="uuid">14d740e5-75fa-4dec-a80f-f967c1cd1930</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </system>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <os>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </os>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <features>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </features>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.config"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:13:38:17"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <target dev="tapca1db542-80"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/console.log" append="off"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <video>
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </video>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:08:20 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:08:20 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:08:20 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:08:20 compute-0 nova_compute[188703]: </domain>
Feb 24 16:08:20 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.113 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Preparing to wait for external event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.113 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.113 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.114 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
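"Preparing to wait for external event network-vif-plugged-<port>" registers an event object before the guest is started; Neutron later posts network-vif-plugged through Nova's external-events API, which sets the event and unblocks the spawn. A sketch of the pattern using threading.Event as a stand-in (Nova uses eventlet primitives, with the registry guarded by the "-events" lock shown above):

import threading

events = {}  # event name -> threading.Event; stand-in for Nova's registry

def prepare_for_instance_event(name):
    return events.setdefault(name, threading.Event())

def deliver_external_event(name):
    # Invoked when Neutron's network-vif-plugged notification arrives.
    if name in events:
        events[name].set()

ev = prepare_for_instance_event(
    "network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823")
# ... define and start the domain, then block until the VIF is wired up:
# ev.wait(timeout=300)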
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.115 188707 DEBUG nova.virt.libvirt.vif [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2029362323',display_name='tempest-ServersTestJSON-server-2029362323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2029362323',id=6,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUKNv5sASPVVbHbeNJyEv+jTZFVLgLh0UG0y7vF2Z+RF00Ms2LzT+bzZz4VmeRuEyhOSEtsx0wEQBSrjeB6gGMSTlb2QGEtdakWbMAcLEVgfuewmLKJ9ffPHeSzJdqhZQ==',key_name='tempest-keypair-2010026268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60a6dc6c7c8f4f06b380816ca7e12999',ramdisk_id='',reservation_id='r-xiuewpiu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-767085278',owner_user_name='tempest-ServersTestJSON-767085278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11baf60433794c759ac9aae534db1341',uuid=14d740e5-75fa-4dec-a80f-f967c1cd1930,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.115 188707 DEBUG nova.network.os_vif_util [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converting VIF {"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.116 188707 DEBUG nova.network.os_vif_util [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.117 188707 DEBUG os_vif [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.118 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.118 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.119 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.123 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.123 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca1db542-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.124 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca1db542-80, col_values=(('external_ids', {'iface-id': 'ca1db542-80c5-48ff-a87c-ab82662c6823', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:38:17', 'vm-uuid': '14d740e5-75fa-4dec-a80f-f967c1cd1930'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.128 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.1319] manager: (tapca1db542-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.141 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.143 188707 INFO os_vif [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80')
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.186 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.186 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.186 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] No VIF found with MAC fa:16:3e:13:38:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.187 188707 INFO nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Using config drive
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.258 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.593 188707 INFO nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Creating config drive at /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.config
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.601 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphm_iidbg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.729 188707 DEBUG oslo_concurrency.processutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphm_iidbg" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:20 compute-0 kernel: tapc26aa9e8-b1: entered promiscuous mode
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.8013] manager: (tapc26aa9e8-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.805 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 ovn_controller[98701]: 2026-02-24T16:08:20Z|00066|binding|INFO|Claiming lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 for this chassis.
Feb 24 16:08:20 compute-0 ovn_controller[98701]: 2026-02-24T16:08:20Z|00067|binding|INFO|c26aa9e8-b157-4dd8-8c4c-2767f7a725f4: Claiming fa:16:3e:1c:99:52 10.100.0.7
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.815 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:99:52 10.100.0.7'], port_security=['fa:16:3e:1c:99:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ed039f2-92fd-4c07-9a3c-df2da1172e12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea8bd642-3dcc-421c-b6d8-009d58526417', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff039f17be824e0da1015761ba1fc96a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c7d6a6a-c6cc-4a92-80e8-048801b99214', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300455a1-55a1-4123-af9e-3289e53c8820, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.818 108026 INFO neutron.agent.ovn.metadata.agent [-] Port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 in datapath ea8bd642-3dcc-421c-b6d8-009d58526417 bound to our chassis
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.823 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea8bd642-3dcc-421c-b6d8-009d58526417
Feb 24 16:08:20 compute-0 ovn_controller[98701]: 2026-02-24T16:08:20Z|00068|binding|INFO|Setting lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 ovn-installed in OVS
Feb 24 16:08:20 compute-0 ovn_controller[98701]: 2026-02-24T16:08:20Z|00069|binding|INFO|Setting lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 up in Southbound
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.837 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.836 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed0a241-59ce-44c1-9f18-ab5607125d9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.837 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea8bd642-31 in ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.840 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea8bd642-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.840 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[886c0a34-17fa-4a45-8a57-9c3e6da70369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.842 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6f966d7c-0f22-42c3-8d1b-c99f306671e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 nova_compute[188703]: 2026-02-24 16:08:20.845 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:20 compute-0 systemd-machined[158049]: New machine qemu-6-instance-00000007.
Feb 24 16:08:20 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000007.
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.857 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e72ef8-6a4d-4607-8096-4ad533168243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 systemd-udevd[252184]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.886 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[a5149c1b-8584-4a64-87fe-57bc80d6eb84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.8899] device (tapc26aa9e8-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.8960] device (tapc26aa9e8-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.908 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[2eef09d3-deaa-4f5f-821f-8ae872860349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.913 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[14c85f15-a258-4c01-aecd-68d877ce00fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.9169] manager: (tapea8bd642-30): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.940 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[4afad00e-570d-4c5e-a9bd-cb7d2718a5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.943 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[1c69e6fa-50ea-41b7-9eb0-62f54f584fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 NetworkManager[56995]: <info>  [1771949300.9647] device (tapea8bd642-30): carrier: link connected
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.968 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a63477-8431-468a-aef7-151e19dad5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.982 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[fdde7cd6-4d42-4f20-a22d-8f238227f80b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea8bd642-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:01:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501346, 'reachable_time': 19319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252217, 'error': None, 'target': 'ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:20.994 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[dce34aa7-74e2-43b5-96d6-ba9c5a57f94e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:1c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501346, 'tstamp': 501346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252218, 'error': None, 'target': 'ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.007 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cb45de-a30a-4d57-a525-c30eb9bbc37c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea8bd642-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:01:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501346, 'reachable_time': 19319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252219, 'error': None, 'target': 'ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.028 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[21e6a877-7f7b-4280-8cd3-0483795141b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.087 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[657c6430-0826-4674-ad68-006c81fce916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.089 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea8bd642-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.090 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.091 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea8bd642-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:21 compute-0 kernel: tapea8bd642-30: entered promiscuous mode
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.094 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.0985] manager: (tapea8bd642-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.105 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.113 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea8bd642-30, col_values=(('external_ids', {'iface-id': '262cdbf1-c669-4983-b196-f68920cf4249'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:21 compute-0 ovn_controller[98701]: 2026-02-24T16:08:21Z|00070|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.117 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.124 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.125 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea8bd642-3dcc-421c-b6d8-009d58526417.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea8bd642-3dcc-421c-b6d8-009d58526417.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.126 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0760c918-b88f-4da8-910f-27ae662497e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.127 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-ea8bd642-3dcc-421c-b6d8-009d58526417
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/ea8bd642-3dcc-421c-b6d8-009d58526417.pid.haproxy
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID ea8bd642-3dcc-421c-b6d8-009d58526417
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.128 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417', 'env', 'PROCESS_TAG=haproxy-ea8bd642-3dcc-421c-b6d8-009d58526417', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea8bd642-3dcc-421c-b6d8-009d58526417.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.268 188707 INFO nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Creating config drive at /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.config
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.273 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfnr458b7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:21 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 16:08:21 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.357 188707 DEBUG nova.compute.manager [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-changed-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.358 188707 DEBUG nova.compute.manager [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Refreshing instance network info cache due to event network-changed-5d11a818-6e9b-4361-9065-15b7ad0b90cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.359 188707 DEBUG oslo_concurrency.lockutils [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.389 188707 DEBUG oslo_concurrency.processutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfnr458b7" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:21 compute-0 kernel: tapca1db542-80: entered promiscuous mode
Feb 24 16:08:21 compute-0 ovn_controller[98701]: 2026-02-24T16:08:21Z|00071|binding|INFO|Claiming lport ca1db542-80c5-48ff-a87c-ab82662c6823 for this chassis.
Feb 24 16:08:21 compute-0 ovn_controller[98701]: 2026-02-24T16:08:21Z|00072|binding|INFO|ca1db542-80c5-48ff-a87c-ab82662c6823: Claiming fa:16:3e:13:38:17 10.100.0.14
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.443 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.4469] manager: (tapca1db542-80): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Feb 24 16:08:21 compute-0 systemd-udevd[252200]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.452 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.453 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:38:17 10.100.0.14'], port_security=['fa:16:3e:13:38:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14d740e5-75fa-4dec-a80f-f967c1cd1930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60a6dc6c7c8f4f06b380816ca7e12999', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e59ac58d-8840-40ec-8ba3-0e4511f74482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=055b9dea-00a9-4cec-937b-00bb57e166e7, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=ca1db542-80c5-48ff-a87c-ab82662c6823) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:08:21 compute-0 ovn_controller[98701]: 2026-02-24T16:08:21Z|00073|binding|INFO|Setting lport ca1db542-80c5-48ff-a87c-ab82662c6823 ovn-installed in OVS
Feb 24 16:08:21 compute-0 ovn_controller[98701]: 2026-02-24T16:08:21Z|00074|binding|INFO|Setting lport ca1db542-80c5-48ff-a87c-ab82662c6823 up in Southbound
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.458 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.4674] device (tapca1db542-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:08:21 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000006.
Feb 24 16:08:21 compute-0 systemd-machined[158049]: New machine qemu-7-instance-00000006.
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.4864] device (tapca1db542-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.568 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949301.5677161, 4ed039f2-92fd-4c07-9a3c-df2da1172e12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.570 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] VM Started (Lifecycle Event)
Feb 24 16:08:21 compute-0 podman[252298]: 2026-02-24 16:08:21.575043217 +0000 UTC m=+0.064131837 container create 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.614 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.620 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949301.5678935, 4ed039f2-92fd-4c07-9a3c-df2da1172e12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.621 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] VM Paused (Lifecycle Event)
Feb 24 16:08:21 compute-0 systemd[1]: Started libpod-conmon-835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0.scope.
Feb 24 16:08:21 compute-0 podman[252298]: 2026-02-24 16:08:21.541521468 +0000 UTC m=+0.030610138 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.651 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:21 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.659 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771cf174f295ee04ade9366d9b9c8fe0dd382e41789f97d68202d79701c8d3d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:08:21 compute-0 podman[252298]: 2026-02-24 16:08:21.697715935 +0000 UTC m=+0.186804645 container init 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 24 16:08:21 compute-0 podman[252298]: 2026-02-24 16:08:21.703597498 +0000 UTC m=+0.192686158 container start 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 16:08:21 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [NOTICE]   (252324) : New worker (252326) forked
Feb 24 16:08:21 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [NOTICE]   (252324) : Loading success.
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.732 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.769 108026 INFO neutron.agent.ovn.metadata.agent [-] Port ca1db542-80c5-48ff-a87c-ab82662c6823 in datapath 85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b unbound from our chassis
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.771 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.777 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[15fb8abd-4302-4689-927d-e6d961f959aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.778 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85731750-01 in ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.780 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85731750-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.780 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6d8176-f8ef-4099-aaee-90d9131e51bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.781 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8da76026-524e-4294-816e-15728e74b747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.792 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[2da4ab52-8eb1-49d3-8c7f-b9e2c9a65007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.802 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[a491046f-c46f-4e0b-9a22-aa682d6ebe8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.826 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff93d06-0e75-4070-8ae7-c1c7cf398788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.832 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ade38c54-7af9-4ce4-902f-bf3aeb51992f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.8339] manager: (tap85731750-00): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.863 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949301.8635008, 14d740e5-75fa-4dec-a80f-f967c1cd1930 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.863 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] VM Started (Lifecycle Event)
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.866 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[296fe1cc-91f4-499d-84ed-69793c081664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.870 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[adad017b-7bb8-402b-98c6-76b2b5de0766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 NetworkManager[56995]: <info>  [1771949301.8938] device (tap85731750-00): carrier: link connected
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.895 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.900 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949301.8635473, 14d740e5-75fa-4dec-a80f-f967c1cd1930 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.900 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] VM Paused (Lifecycle Event)
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.900 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[7bece22b-1f49-4587-8882-af3f5699f70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.916 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8be48306-5731-4578-a7fe-4e60d1667548]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85731750-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:f6:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501439, 'reachable_time': 15555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252352, 'error': None, 'target': 'ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.931 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[13ff0be7-41d2-478b-bdb5-4d138946c439]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:f60a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501439, 'tstamp': 501439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252353, 'error': None, 'target': 'ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.934 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.938 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.946 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[87fb7c5b-5861-49d6-9a36-9dd170c523c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85731750-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:f6:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501439, 'reachable_time': 15555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252354, 'error': None, 'target': 'ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
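
The privsep reply above is a decoded RTM_NEWLINK netlink message for tap85731750-01 inside the ovnmeta- namespace, with attributes carried as [name, value] pairs. A minimal sketch of walking that structure; the get_attr helper is illustrative (pyroute2 messages expose an equivalent accessor) and the dict is a pruned copy of the logged data:

    # The reply is a decoded RTM_NEWLINK message: attributes come as
    # [name, value] pairs, possibly nested. Pruned copy of the logged data.
    link = {
        'index': 2,
        'attrs': [
            ['IFLA_IFNAME', 'tap85731750-01'],
            ['IFLA_OPERSTATE', 'UP'],
            ['IFLA_MTU', 1500],
            ['IFLA_ADDRESS', 'fa:16:3e:87:f6:0a'],
            ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}],
        ],
    }

    def get_attr(msg, name):
        """Return the first attribute with the given name, or None."""
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    kind = get_attr(get_attr(link, 'IFLA_LINKINFO'), 'IFLA_INFO_KIND')
    print(get_attr(link, 'IFLA_IFNAME'), get_attr(link, 'IFLA_OPERSTATE'), kind)
    # -> tap85731750-01 UP veth
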
Feb 24 16:08:21 compute-0 nova_compute[188703]: 2026-02-24 16:08:21.961 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] During sync_power_state the instance has a pending task (spawning). Skip.
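
The two nova.compute.manager records above show the power-state sync path: libvirt reported a "Paused" lifecycle event while the instance is still building (DB power_state 0 = NOSTATE, VM power_state 3 = PAUSED), and the sync is skipped because the spawning task still owns the instance. A minimal sketch of that decision, with the constants matching nova's power_state values and the function name being illustrative:

    # 0 = NOSTATE, 3 = PAUSED, as in nova.compute.power_state and the log.
    NOSTATE, PAUSED = 0, 3

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Decide what to do after a hypervisor lifecycle event."""
        if task_state is not None:
            # An in-flight task (here 'spawning') owns the instance; syncing
            # now would race with it, so the event is noted and skipped.
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'update DB: %d -> %d' % (db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))
    # -> skip: pending task spawning
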
Feb 24 16:08:21 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:21.982 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d06c35d9-0a09-4f93-aaf1-df29ec6f2b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.036 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[692d3d36-6bd6-450b-aa6f-8c1dc8c24613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.038 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85731750-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.039 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.039 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85731750-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
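
The DelPortCommand/AddPortCommand pair above is the idempotent port move: delete tap85731750-00 from br-ex only if present (a no-op here, hence "Transaction caused no change"), then add it to br-int only if absent. For illustration, the ovs-vsctl equivalents of those flags; the agent itself speaks the OVSDB protocol through ovsdbapp rather than shelling out:

    import subprocess

    def move_port(port, old_bridge, new_bridge):
        # --if-exists mirrors DelPortCommand(if_exists=True),
        # --may-exist mirrors AddPortCommand(may_exist=True).
        subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                        old_bridge, port], check=True)
        subprocess.run(['ovs-vsctl', '--may-exist', 'add-port',
                        new_bridge, port], check=True)

    # move_port('tap85731750-00', 'br-ex', 'br-int')
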
Feb 24 16:08:22 compute-0 kernel: tap85731750-00: entered promiscuous mode
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.041 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 NetworkManager[56995]: <info>  [1771949302.0431] manager: (tap85731750-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.044 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.049 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85731750-00, col_values=(('external_ids', {'iface-id': 'fff73773-7e67-49a8-8c12-c1ecf0743a0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.053 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 ovn_controller[98701]: 2026-02-24T16:08:22Z|00075|binding|INFO|Releasing lport fff73773-7e67-49a8-8c12-c1ecf0743a0a from this chassis (sb_readonly=0)
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.055 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
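
The ENOENT on the .pid.haproxy file above is the normal first-spawn path: the agent reads the pidfile to decide whether a metadata proxy already runs for this network, and a missing file simply means it does not. A minimal sketch of that check, with an illustrative function name:

    import errno

    def get_proxy_pid(pidfile):
        try:
            with open(pidfile) as f:
                return int(f.read().strip())
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None  # no proxy yet; the caller spawns one
            raise
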
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.056 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8ccced-a6a0-4351-998a-959c8aa66706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.057 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b.pid.haproxy
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
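
The agent renders one such haproxy config per network: the proxy binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the metadata socket, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the caller's network. A sketch of rendering a comparable config with string.Template; the template text is abbreviated from the logged config and the names are illustrative:

    from string import Template

    HAPROXY_TMPL = Template(
        'global\n'
        '    log /dev/log local0 debug\n'
        '    pidfile $pidfile\n'
        '    daemon\n'
        '\n'
        'listen listener\n'
        '    bind 169.254.169.254:80\n'
        '    server metadata $socket\n'
        '    http-request add-header X-OVN-Network-ID $network_id\n'
    )

    def render_cfg(network_id):
        return HAPROXY_TMPL.substitute(
            network_id=network_id,
            pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id,
            socket='/var/lib/neutron/metadata_proxy',
        )

    print(render_cfg('85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b'))
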
Feb 24 16:08:22 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:22.058 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'env', 'PROCESS_TAG=haproxy-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.061 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.397 188707 DEBUG nova.network.neutron [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updated VIF entry in instance network info cache for port ca1db542-80c5-48ff-a87c-ab82662c6823. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.397 188707 DEBUG nova.network.neutron [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updating instance_info_cache with network_info: [{"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.415 188707 DEBUG oslo_concurrency.lockutils [req-dba7982d-d124-45d6-bfe7-b3fe10da72f7 req-d48643ba-5b9b-4a5a-a6f3-d367babb6752 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:22 compute-0 podman[252384]: 2026-02-24 16:08:22.469460713 +0000 UTC m=+0.094485438 container create 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:08:22 compute-0 podman[252384]: 2026-02-24 16:08:22.414895931 +0000 UTC m=+0.039920716 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:08:22 compute-0 systemd[1]: Started libpod-conmon-9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f.scope.
Feb 24 16:08:22 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f0e19fa1a647ff2172beee0193bfbd5242c7358ae198cc9c0bfd15d9694aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:08:22 compute-0 podman[252384]: 2026-02-24 16:08:22.587730269 +0000 UTC m=+0.212754954 container init 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:08:22 compute-0 podman[252384]: 2026-02-24 16:08:22.594725753 +0000 UTC m=+0.219750468 container start 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 24 16:08:22 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [NOTICE]   (252403) : New worker (252405) forked
Feb 24 16:08:22 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [NOTICE]   (252403) : Loading success.
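
The podman records above trace the proxy container lifecycle (image pull, create, init, start), after which haproxy forks its worker and reports "Loading success." A small sketch for verifying that such a container came up, using podman's Go-template inspect output; the helper name is mine:

    import subprocess

    def container_status(name):
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{.State.Status}}', name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # container_status('neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b')
    # -> 'running'
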
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.665 188707 DEBUG nova.network.neutron [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Updating instance_info_cache with network_info: [{"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.709 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Releasing lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.710 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Instance network_info: |[{"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.710 188707 DEBUG oslo_concurrency.lockutils [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.710 188707 DEBUG nova.network.neutron [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Refreshing network info cache for port 5d11a818-6e9b-4361-9065-15b7ad0b90cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.714 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Start _get_guest_xml network_info=[{"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.722 188707 WARNING nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.729 188707 DEBUG nova.virt.libvirt.host [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.730 188707 DEBUG nova.virt.libvirt.host [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.738 188707 DEBUG nova.virt.libvirt.host [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.738 188707 DEBUG nova.virt.libvirt.host [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
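
The host.py records above probe for a CPU controller first under cgroups v1 (missing on this host) and then v2 (found); on a unified-hierarchy host the available controllers are listed in a single file. A minimal version of the v2 check:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # On cgroups v2 this file lists space-separated controller names.
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except OSError:
            return False  # no unified hierarchy mounted here
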
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.739 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.739 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.740 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.740 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.740 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.740 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.740 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.741 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.741 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.741 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.742 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.742 188707 DEBUG nova.virt.hardware [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
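
The hardware.py records above walk the CPU topology search: with no flavor or image constraints (limits and preferences all 0:0:0), the only candidate for 1 vCPU is sockets=1, cores=1, threads=1. A simplified re-implementation of that enumeration; nova's real code additionally applies the maxima and preference-based sorting:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate (sockets, cores, threads) whose product is vcpus."""
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the case in this log
    print(possible_topologies(4))  # (1, 1, 4), (1, 2, 2), ..., (4, 1, 1)
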
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.746 188707 DEBUG nova.virt.libvirt.vif [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1722877156',display_name='tempest-ServerAddressesTestJSON-server-1722877156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1722877156',id=8,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b20224ac879431e8ca256556525e6fd',ramdisk_id='',reservation_id='r-oar5k54k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-791689798',owner_user_name='tempest-ServerAddressesTestJSON-791689798-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:16Z,user_data=None,user_id='5f4c6a77687e48f2b855bd5e6fb8bb86',uuid=e99b5727-be77-4c73-a60b-26188853674c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.747 188707 DEBUG nova.network.os_vif_util [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converting VIF {"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.747 188707 DEBUG nova.network.os_vif_util [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.748 188707 DEBUG nova.objects.instance [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lazy-loading 'pci_devices' on Instance uuid e99b5727-be77-4c73-a60b-26188853674c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.763 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <uuid>e99b5727-be77-4c73-a60b-26188853674c</uuid>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <name>instance-00000008</name>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:name>tempest-ServerAddressesTestJSON-server-1722877156</nova:name>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:08:22</nova:creationTime>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:user uuid="5f4c6a77687e48f2b855bd5e6fb8bb86">tempest-ServerAddressesTestJSON-791689798-project-member</nova:user>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:project uuid="8b20224ac879431e8ca256556525e6fd">tempest-ServerAddressesTestJSON-791689798</nova:project>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         <nova:port uuid="5d11a818-6e9b-4361-9065-15b7ad0b90cb">
Feb 24 16:08:22 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <system>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="serial">e99b5727-be77-4c73-a60b-26188853674c</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="uuid">e99b5727-be77-4c73-a60b-26188853674c</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </system>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <os>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </os>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <features>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </features>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.config"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:b3:fd:4e"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <target dev="tap5d11a818-6e"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/console.log" append="off"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <video>
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </video>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:08:22 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:08:22 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:08:22 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:08:22 compute-0 nova_compute[188703]: </domain>
Feb 24 16:08:22 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
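
The domain XML dumped above is plain libvirt XML and can be inspected with the standard library. A short sketch pulling out the pieces this trace cares about (disk paths and the tap device); the string here is an abbreviated stand-in for the full dump:

    import xml.etree.ElementTree as ET

    xml = """<domain type="kvm">
      <devices>
        <disk type="file" device="disk">
          <source file="/var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:b3:fd:4e"/>
          <target dev="tap5d11a818-6e"/>
        </interface>
      </devices>
    </domain>"""

    root = ET.fromstring(xml)
    for disk in root.iter('disk'):
        print('disk', disk.find('target').get('dev'),
              disk.find('source').get('file'))
    for iface in root.iter('interface'):
        print('vif', iface.find('target').get('dev'),
              iface.find('mac').get('address'))
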
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.763 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Preparing to wait for external event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.764 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.764 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.764 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.765 188707 DEBUG nova.virt.libvirt.vif [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1722877156',display_name='tempest-ServerAddressesTestJSON-server-1722877156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1722877156',id=8,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b20224ac879431e8ca256556525e6fd',ramdisk_id='',reservation_id='r-oar5k54k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-791689798',owner_user_name='tempest-ServerAddressesTestJSON-791689798-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:08:16Z,user_data=None,user_id='5f4c6a77687e48f2b855bd5e6fb8bb86',uuid=e99b5727-be77-4c73-a60b-26188853674c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.765 188707 DEBUG nova.network.os_vif_util [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converting VIF {"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.766 188707 DEBUG nova.network.os_vif_util [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.766 188707 DEBUG os_vif [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.767 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.767 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.768 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.771 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.771 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d11a818-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.772 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d11a818-6e, col_values=(('external_ids', {'iface-id': '5d11a818-6e9b-4361-9065-15b7ad0b90cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:fd:4e', 'vm-uuid': 'e99b5727-be77-4c73-a60b-26188853674c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.774 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 NetworkManager[56995]: <info>  [1771949302.7750] manager: (tap5d11a818-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.775 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.784 188707 INFO os_vif [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e')
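
Plugging the VIF amounts to the AddPortCommand plus the DbSetCommand above: the external_ids written on the Interface row (iface-id, attached-mac, vm-uuid) are what ovn-controller matches against the logical switch port to bind it to this chassis. The ovs-vsctl equivalent, for illustration only, with the values copied from the log:

    external_ids = {
        'iface-id': '5d11a818-6e9b-4361-9065-15b7ad0b90cb',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:b3:fd:4e',
        'vm-uuid': 'e99b5727-be77-4c73-a60b-26188853674c',
    }

    cmd = ['ovs-vsctl', 'set', 'Interface', 'tap5d11a818-6e']
    cmd += ['external_ids:%s=%s' % kv for kv in external_ids.items()]
    print(' '.join(cmd))  # run with subprocess.run(cmd, check=True) on a host
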
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.847 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.847 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.848 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] No VIF found with MAC fa:16:3e:b3:fd:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.848 188707 INFO nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Using config drive
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.927 188707 DEBUG nova.network.neutron [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updated VIF entry in instance network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.927 188707 DEBUG nova.network.neutron [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:22 compute-0 nova_compute[188703]: 2026-02-24 16:08:22.951 188707 DEBUG oslo_concurrency.lockutils [req-57f05716-da8b-4d2b-bddb-f95f74e2fc24 req-b654693b-2c7d-4845-b5e5-c75bb57c2d0c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.384 188707 INFO nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Creating config drive at /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.config
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.388 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpabobmfio execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.463 188707 DEBUG nova.compute.manager [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.463 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.463 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.463 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG nova.compute.manager [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Processing event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG nova.compute.manager [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG oslo_concurrency.lockutils [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.464 188707 DEBUG nova.compute.manager [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] No waiting events found dispatching network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.465 188707 WARNING nova.compute.manager [req-a4a13a0f-c59f-4bcd-b379-8cbc4a4eb057 req-51996a8b-fc28-4e21-aac4-71605367cfa8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received unexpected event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 for instance with vm_state building and task_state spawning.
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.465 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.470 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949303.4699671, 4ed039f2-92fd-4c07-9a3c-df2da1172e12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.470 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] VM Resumed (Lifecycle Event)
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.472 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.474 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.481 188707 INFO nova.virt.libvirt.driver [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Instance spawned successfully.
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.481 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.486 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.493 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.503 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.504 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.504 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.505 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.505 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.506 188707 DEBUG nova.virt.libvirt.driver [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.510 188707 DEBUG oslo_concurrency.processutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpabobmfio" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.516 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.5719] manager: (tap5d11a818-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb 24 16:08:23 compute-0 kernel: tap5d11a818-6e: entered promiscuous mode
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.572 188707 INFO nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Took 9.48 seconds to spawn the instance on the hypervisor.
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.578 188707 DEBUG nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.579 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 ovn_controller[98701]: 2026-02-24T16:08:23Z|00076|binding|INFO|Claiming lport 5d11a818-6e9b-4361-9065-15b7ad0b90cb for this chassis.
Feb 24 16:08:23 compute-0 ovn_controller[98701]: 2026-02-24T16:08:23Z|00077|binding|INFO|5d11a818-6e9b-4361-9065-15b7ad0b90cb: Claiming fa:16:3e:b3:fd:4e 10.100.0.13
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.5852] device (tap5d11a818-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.5862] device (tap5d11a818-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.588 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:fd:4e 10.100.0.13'], port_security=['fa:16:3e:b3:fd:4e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e99b5727-be77-4c73-a60b-26188853674c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b20224ac879431e8ca256556525e6fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9255168-10aa-492a-b407-feeaa975969c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b77068ac-63eb-4c40-a634-0fd66547868a, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=5d11a818-6e9b-4361-9065-15b7ad0b90cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:08:23 compute-0 ovn_controller[98701]: 2026-02-24T16:08:23Z|00078|binding|INFO|Setting lport 5d11a818-6e9b-4361-9065-15b7ad0b90cb ovn-installed in OVS
Feb 24 16:08:23 compute-0 ovn_controller[98701]: 2026-02-24T16:08:23Z|00079|binding|INFO|Setting lport 5d11a818-6e9b-4361-9065-15b7ad0b90cb up in Southbound
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.592 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.596 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.597 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 5d11a818-6e9b-4361-9065-15b7ad0b90cb in datapath cc847011-9955-49b2-86ae-9ffa2a2ca759 bound to our chassis
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.601 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc847011-9955-49b2-86ae-9ffa2a2ca759
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.613 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[96f8e98b-cbca-448f-88fc-a517d807af05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.614 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc847011-91 in ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.616 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc847011-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.616 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[5139af12-7ef6-4b0a-b848-7ee6257cb6ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.618 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9165d028-e080-407a-b137-61bb1c9ae46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 systemd-machined[158049]: New machine qemu-8-instance-00000008.
Feb 24 16:08:23 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.634 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff132d2-287c-45e0-8914-3973053a899d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.660 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[874eb6eb-7cf9-49f8-b640-d5d1af6b6eac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.662 188707 INFO nova.compute.manager [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Took 10.21 seconds to build instance.
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.689 188707 DEBUG oslo_concurrency.lockutils [None req-035cb594-08bf-4fba-b4e1-51aa5b26b097 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.699 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[09813169-fee8-42fb-bcd5-c94e588eba0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.710 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4dbac4-19c0-4cde-bb2c-3133bf33acf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.7113] manager: (tapcc847011-90): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.737 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[adf9b817-05f9-4f43-a8c7-ed5a6bcb1038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.740 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[f3067035-7e9a-44a8-ba26-f0fbc82dbe07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.7590] device (tapcc847011-90): carrier: link connected
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.764 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[a0276134-7c84-4545-8b43-4cbade27c7f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.777 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2edd9076-58a8-4863-95de-808988976d69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc847011-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:b3:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501626, 'reachable_time': 40751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252453, 'error': None, 'target': 'ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.791 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[5d093275-e39b-4fe4-9d80-5a35f1190a20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:b381'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501626, 'tstamp': 501626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252454, 'error': None, 'target': 'ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.805 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[65e2a820-f554-4eb3-94e3-1f7193dbec67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc847011-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:b3:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501626, 'reachable_time': 40751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252455, 'error': None, 'target': 'ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.827 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8340fe67-ac07-4876-b4ea-d4f0eb1f9dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.881 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1a766f17-d7d5-4a8b-9a95-a1d9b128b665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.884 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc847011-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.884 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.885 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc847011-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:23 compute-0 kernel: tapcc847011-90: entered promiscuous mode
Feb 24 16:08:23 compute-0 NetworkManager[56995]: <info>  [1771949303.8879] manager: (tapcc847011-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.888 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.895 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc847011-90, col_values=(('external_ids', {'iface-id': 'bbb86f69-6a0d-4c80-9f5b-6d6d0186e0e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:23 compute-0 ovn_controller[98701]: 2026-02-24T16:08:23Z|00080|binding|INFO|Releasing lport bbb86f69-6a0d-4c80-9f5b-6d6d0186e0e7 from this chassis (sb_readonly=0)
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.897 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.900 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc847011-9955-49b2-86ae-9ffa2a2ca759.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc847011-9955-49b2-86ae-9ffa2a2ca759.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:08:23 compute-0 nova_compute[188703]: 2026-02-24 16:08:23.902 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.903 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc44df3-fa6f-4004-9297-f05473077e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.904 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-cc847011-9955-49b2-86ae-9ffa2a2ca759
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/cc847011-9955-49b2-86ae-9ffa2a2ca759.pid.haproxy
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID cc847011-9955-49b2-86ae-9ffa2a2ca759
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:08:23 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:23.905 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'env', 'PROCESS_TAG=haproxy-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc847011-9955-49b2-86ae-9ffa2a2ca759.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.083 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949304.0829163, e99b5727-be77-4c73-a60b-26188853674c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.084 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] VM Started (Lifecycle Event)
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.114 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.119 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949304.0832095, e99b5727-be77-4c73-a60b-26188853674c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.119 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] VM Paused (Lifecycle Event)
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.141 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.145 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:24 compute-0 nova_compute[188703]: 2026-02-24 16:08:24.165 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:08:24 compute-0 podman[252496]: 2026-02-24 16:08:24.274914676 +0000 UTC m=+0.068094858 container create 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:08:24 compute-0 systemd[1]: Started libpod-conmon-5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7.scope.
Feb 24 16:08:24 compute-0 podman[252496]: 2026-02-24 16:08:24.239489744 +0000 UTC m=+0.032669946 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:08:24 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a326d92ace47a2f272d25f1cd0058e45649e16431536aea170424ddff9d9782/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:08:24 compute-0 podman[252496]: 2026-02-24 16:08:24.362306796 +0000 UTC m=+0.155486978 container init 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:08:24 compute-0 podman[252496]: 2026-02-24 16:08:24.370576995 +0000 UTC m=+0.163757177 container start 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:08:24 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [NOTICE]   (252529) : New worker (252533) forked
Feb 24 16:08:24 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [NOTICE]   (252529) : Loading success.
Feb 24 16:08:24 compute-0 podman[252506]: 2026-02-24 16:08:24.408261499 +0000 UTC m=+0.097766279 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, release=1770267347, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.066 188707 DEBUG nova.network.neutron [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Updated VIF entry in instance network info cache for port 5d11a818-6e9b-4361-9065-15b7ad0b90cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.068 188707 DEBUG nova.network.neutron [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Updating instance_info_cache with network_info: [{"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.091 188707 DEBUG oslo_concurrency.lockutils [req-2dbbe767-2f81-47a5-8983-7030859eea19 req-3086d0e7-8d83-4883-9e4b-a380df7c58bb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-e99b5727-be77-4c73-a60b-26188853674c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.310 188707 DEBUG nova.compute.manager [req-198609c7-5475-44be-841a-7a9f3399f0c4 req-b6271343-435d-4bc2-9de6-5069cee46965 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.311 188707 DEBUG oslo_concurrency.lockutils [req-198609c7-5475-44be-841a-7a9f3399f0c4 req-b6271343-435d-4bc2-9de6-5069cee46965 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.311 188707 DEBUG oslo_concurrency.lockutils [req-198609c7-5475-44be-841a-7a9f3399f0c4 req-b6271343-435d-4bc2-9de6-5069cee46965 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.312 188707 DEBUG oslo_concurrency.lockutils [req-198609c7-5475-44be-841a-7a9f3399f0c4 req-b6271343-435d-4bc2-9de6-5069cee46965 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.312 188707 DEBUG nova.compute.manager [req-198609c7-5475-44be-841a-7a9f3399f0c4 req-b6271343-435d-4bc2-9de6-5069cee46965 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Processing event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.313 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.318 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.321 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949305.3203602, 14d740e5-75fa-4dec-a80f-f967c1cd1930 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.322 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] VM Resumed (Lifecycle Event)
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.328 188707 INFO nova.virt.libvirt.driver [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Instance spawned successfully.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.330 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.347 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.357 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.361 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.362 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.363 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.364 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.365 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.366 188707 DEBUG nova.virt.libvirt.driver [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.400 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.434 188707 INFO nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Took 12.75 seconds to spawn the instance on the hypervisor.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.435 188707 DEBUG nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.512 188707 INFO nova.compute.manager [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Took 13.35 seconds to build instance.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.543 188707 DEBUG oslo_concurrency.lockutils [None req-c34ebda0-24b5-4512-a4b0-0f15dcda2b2b 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.864 188707 DEBUG nova.compute.manager [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.865 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.865 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.866 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.866 188707 DEBUG nova.compute.manager [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Processing event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.867 188707 DEBUG nova.compute.manager [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.868 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.868 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.869 188707 DEBUG oslo_concurrency.lockutils [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.869 188707 DEBUG nova.compute.manager [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] No waiting events found dispatching network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.870 188707 WARNING nova.compute.manager [req-7cc191d4-6dec-49ef-afca-f157d8d0319e req-c6cf14af-43cf-44ef-a5d1-817c53e8d66f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received unexpected event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb for instance with vm_state building and task_state spawning.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.871 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.875 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949305.8749704, e99b5727-be77-4c73-a60b-26188853674c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.875 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] VM Resumed (Lifecycle Event)
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.886 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.891 188707 INFO nova.virt.libvirt.driver [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] Instance spawned successfully.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.892 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.895 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.900 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.912 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.913 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.913 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.914 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.915 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.916 188707 DEBUG nova.virt.libvirt.driver [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.920 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.995 188707 INFO nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Took 9.72 seconds to spawn the instance on the hypervisor.
Feb 24 16:08:25 compute-0 nova_compute[188703]: 2026-02-24 16:08:25.996 188707 DEBUG nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:26 compute-0 nova_compute[188703]: 2026-02-24 16:08:26.105 188707 INFO nova.compute.manager [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Took 10.60 seconds to build instance.
Feb 24 16:08:26 compute-0 nova_compute[188703]: 2026-02-24 16:08:26.198 188707 DEBUG oslo_concurrency.lockutils [None req-7a00166a-77ab-4066-a447-80c3a0a6f174 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:27 compute-0 ovn_controller[98701]: 2026-02-24T16:08:27Z|00081|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:08:27 compute-0 ovn_controller[98701]: 2026-02-24T16:08:27Z|00082|binding|INFO|Releasing lport fff73773-7e67-49a8-8c12-c1ecf0743a0a from this chassis (sb_readonly=0)
Feb 24 16:08:27 compute-0 ovn_controller[98701]: 2026-02-24T16:08:27Z|00083|binding|INFO|Releasing lport bbb86f69-6a0d-4c80-9f5b-6d6d0186e0e7 from this chassis (sb_readonly=0)
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.340 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.499 188707 DEBUG nova.compute.manager [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.500 188707 DEBUG oslo_concurrency.lockutils [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.500 188707 DEBUG oslo_concurrency.lockutils [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.501 188707 DEBUG oslo_concurrency.lockutils [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.501 188707 DEBUG nova.compute.manager [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] No waiting events found dispatching network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.501 188707 WARNING nova.compute.manager [req-7fdc0c10-b0e8-4701-9edc-44919b06e131 req-96c068c5-963d-48a4-a338-8d4d955399ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received unexpected event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 for instance with vm_state active and task_state None.
Feb 24 16:08:27 compute-0 nova_compute[188703]: 2026-02-24 16:08:27.774 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.477 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.687 188707 DEBUG nova.compute.manager [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.689 188707 DEBUG nova.compute.manager [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing instance network info cache due to event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.690 188707 DEBUG oslo_concurrency.lockutils [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.691 188707 DEBUG oslo_concurrency.lockutils [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:28 compute-0 nova_compute[188703]: 2026-02-24 16:08:28.692 188707 DEBUG nova.network.neutron [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.473 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.477 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.479 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.480 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.481 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.483 188707 INFO nova.compute.manager [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Terminating instance
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.485 188707 DEBUG nova.compute.manager [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:08:29 compute-0 kernel: tap5d11a818-6e (unregistering): left promiscuous mode
Feb 24 16:08:29 compute-0 NetworkManager[56995]: <info>  [1771949309.5320] device (tap5d11a818-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.566 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 ovn_controller[98701]: 2026-02-24T16:08:29Z|00084|binding|INFO|Releasing lport 5d11a818-6e9b-4361-9065-15b7ad0b90cb from this chassis (sb_readonly=0)
Feb 24 16:08:29 compute-0 ovn_controller[98701]: 2026-02-24T16:08:29Z|00085|binding|INFO|Setting lport 5d11a818-6e9b-4361-9065-15b7ad0b90cb down in Southbound
Feb 24 16:08:29 compute-0 ovn_controller[98701]: 2026-02-24T16:08:29Z|00086|binding|INFO|Removing iface tap5d11a818-6e ovn-installed in OVS
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.568 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 24 16:08:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 4.217s CPU time.
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.574 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.575 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:fd:4e 10.100.0.13'], port_security=['fa:16:3e:b3:fd:4e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e99b5727-be77-4c73-a60b-26188853674c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b20224ac879431e8ca256556525e6fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9255168-10aa-492a-b407-feeaa975969c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b77068ac-63eb-4c40-a634-0fd66547868a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=5d11a818-6e9b-4361-9065-15b7ad0b90cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:08:29 compute-0 systemd-machined[158049]: Machine qemu-8-instance-00000008 terminated.
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.578 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 5d11a818-6e9b-4361-9065-15b7ad0b90cb in datapath cc847011-9955-49b2-86ae-9ffa2a2ca759 unbound from our chassis
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.582 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc847011-9955-49b2-86ae-9ffa2a2ca759, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.584 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d9af6d14-6594-4d71-984a-a10a2af2c0f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.586 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759 namespace which is not needed anymore
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.709 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.721 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.744 188707 INFO nova.virt.libvirt.driver [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] Instance destroyed successfully.
Feb 24 16:08:29 compute-0 podman[204685]: time="2026-02-24T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:08:29 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [NOTICE]   (252529) : haproxy version is 2.8.14-c23fe91
Feb 24 16:08:29 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [NOTICE]   (252529) : path to executable is /usr/sbin/haproxy
Feb 24 16:08:29 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [WARNING]  (252529) : Exiting Master process...
Feb 24 16:08:29 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [ALERT]    (252529) : Current worker (252533) exited with code 143 (Terminated)
Feb 24 16:08:29 compute-0 neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759[252514]: [WARNING]  (252529) : All workers exited. Exiting... (0)
Feb 24 16:08:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31702 "" "Go-http-client/1.1"
Feb 24 16:08:29 compute-0 systemd[1]: libpod-5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7.scope: Deactivated successfully.
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.754 188707 DEBUG nova.objects.instance [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lazy-loading 'resources' on Instance uuid e99b5727-be77-4c73-a60b-26188853674c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:29 compute-0 podman[252568]: 2026-02-24 16:08:29.760667875 +0000 UTC m=+0.061679799 container died 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.773 188707 DEBUG nova.virt.libvirt.vif [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1722877156',display_name='tempest-ServerAddressesTestJSON-server-1722877156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1722877156',id=8,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:08:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8b20224ac879431e8ca256556525e6fd',ramdisk_id='',reservation_id='r-oar5k54k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-791689798',owner_user_name='tempest-ServerAddressesTestJSON-791689798-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:08:26Z,user_data=None,user_id='5f4c6a77687e48f2b855bd5e6fb8bb86',uuid=e99b5727-be77-4c73-a60b-26188853674c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.775 188707 DEBUG nova.network.os_vif_util [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converting VIF {"id": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "address": "fa:16:3e:b3:fd:4e", "network": {"id": "cc847011-9955-49b2-86ae-9ffa2a2ca759", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-862981330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b20224ac879431e8ca256556525e6fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d11a818-6e", "ovs_interfaceid": "5d11a818-6e9b-4361-9065-15b7ad0b90cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.776 188707 DEBUG nova.network.os_vif_util [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.777 188707 DEBUG os_vif [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.779 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.779 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d11a818-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.782 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.785 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.787 188707 INFO os_vif [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:fd:4e,bridge_name='br-int',has_traffic_filtering=True,id=5d11a818-6e9b-4361-9065-15b7ad0b90cb,network=Network(cc847011-9955-49b2-86ae-9ffa2a2ca759),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d11a818-6e')
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.788 188707 INFO nova.virt.libvirt.driver [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Deleting instance files /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c_del
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.788 188707 INFO nova.virt.libvirt.driver [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Deletion of /var/lib/nova/instances/e99b5727-be77-4c73-a60b-26188853674c_del complete
Feb 24 16:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7-userdata-shm.mount: Deactivated successfully.
Feb 24 16:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a326d92ace47a2f272d25f1cd0058e45649e16431536aea170424ddff9d9782-merged.mount: Deactivated successfully.
Feb 24 16:08:29 compute-0 podman[252568]: 2026-02-24 16:08:29.828268008 +0000 UTC m=+0.129279922 container cleanup 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 16:08:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4850 "" "Go-http-client/1.1"
Feb 24 16:08:29 compute-0 systemd[1]: libpod-conmon-5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7.scope: Deactivated successfully.
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.864 188707 DEBUG nova.compute.manager [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-changed-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.865 188707 DEBUG nova.compute.manager [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Refreshing instance network info cache due to event network-changed-ca1db542-80c5-48ff-a87c-ab82662c6823. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.865 188707 DEBUG oslo_concurrency.lockutils [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.866 188707 DEBUG oslo_concurrency.lockutils [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.866 188707 DEBUG nova.network.neutron [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Refreshing network info cache for port ca1db542-80c5-48ff-a87c-ab82662c6823 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.876 188707 INFO nova.compute.manager [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Took 0.39 seconds to destroy the instance on the hypervisor.
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.877 188707 DEBUG oslo.service.loopingcall [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.877 188707 DEBUG nova.compute.manager [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.878 188707 DEBUG nova.network.neutron [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:08:29 compute-0 podman[252603]: 2026-02-24 16:08:29.92146983 +0000 UTC m=+0.066890084 container remove 5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.932 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdf4c91-6a30-4f8b-ae1c-68dc3c3c5a16]: (4, ('Tue Feb 24 04:08:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759 (5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7)\n5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7\nTue Feb 24 04:08:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759 (5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7)\n5a1dfd685c9815cf929b81994f2e9b8a44f8cf93781b82f0f852d63641a613c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.933 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6a0ba0-38ba-4b63-8840-bf562373758c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.934 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc847011-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.936 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 kernel: tapcc847011-90: left promiscuous mode
Feb 24 16:08:29 compute-0 nova_compute[188703]: 2026-02-24 16:08:29.944 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.950 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e334dcb2-eca4-4d9a-a985-229c931e6bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.976 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0f0092-df01-4f17-be1a-ec4dce4ace4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.977 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1ea039-7fb3-4718-aa60-980dcc72ddad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:29.996 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e4dcac68-8d10-411c-8b13-1ce0dee1b10d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501618, 'reachable_time': 29970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252617, 'error': None, 'target': 'ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dcc847011\x2d9955\x2d49b2\x2d86ae\x2d9ffa2a2ca759.mount: Deactivated successfully.
Feb 24 16:08:30 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:30.001 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc847011-9955-49b2-86ae-9ffa2a2ca759 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:08:30 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:30.002 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba207bf-4f3a-45e9-ba91-c9f3322163d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:31 compute-0 podman[252618]: 2026-02-24 16:08:31.149011383 +0000 UTC m=+0.097030009 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:08:31 compute-0 podman[252619]: 2026-02-24 16:08:31.202885476 +0000 UTC m=+0.151695084 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.232 188707 DEBUG nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-unplugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.234 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.235 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.236 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.237 188707 DEBUG nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] No waiting events found dispatching network-vif-unplugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.238 188707 DEBUG nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-unplugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.239 188707 DEBUG nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.240 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e99b5727-be77-4c73-a60b-26188853674c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.241 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.242 188707 DEBUG oslo_concurrency.lockutils [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.243 188707 DEBUG nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] No waiting events found dispatching network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.243 188707 WARNING nova.compute.manager [req-fea3dd17-53be-485a-9819-3470daa233f2 req-ad10760f-7382-4598-a87b-f3215b3f1e51 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received unexpected event network-vif-plugged-5d11a818-6e9b-4361-9065-15b7ad0b90cb for instance with vm_state active and task_state deleting.
Feb 24 16:08:31 compute-0 openstack_network_exporter[207830]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:08:31 compute-0 openstack_network_exporter[207830]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.900 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.901 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.902 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.902 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.903 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.904 188707 INFO nova.compute.manager [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Terminating instance
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.905 188707 DEBUG nova.compute.manager [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:08:31 compute-0 kernel: tapca1db542-80 (unregistering): left promiscuous mode
Feb 24 16:08:31 compute-0 NetworkManager[56995]: <info>  [1771949311.9479] device (tapca1db542-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:08:31 compute-0 ovn_controller[98701]: 2026-02-24T16:08:31Z|00087|binding|INFO|Releasing lport ca1db542-80c5-48ff-a87c-ab82662c6823 from this chassis (sb_readonly=0)
Feb 24 16:08:31 compute-0 ovn_controller[98701]: 2026-02-24T16:08:31Z|00088|binding|INFO|Setting lport ca1db542-80c5-48ff-a87c-ab82662c6823 down in Southbound
Feb 24 16:08:31 compute-0 ovn_controller[98701]: 2026-02-24T16:08:31Z|00089|binding|INFO|Removing iface tapca1db542-80 ovn-installed in OVS
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.961 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:31 compute-0 ovn_controller[98701]: 2026-02-24T16:08:31Z|00090|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:08:31 compute-0 ovn_controller[98701]: 2026-02-24T16:08:31Z|00091|binding|INFO|Releasing lport fff73773-7e67-49a8-8c12-c1ecf0743a0a from this chassis (sb_readonly=0)
Feb 24 16:08:31 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:31.970 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:38:17 10.100.0.14'], port_security=['fa:16:3e:13:38:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14d740e5-75fa-4dec-a80f-f967c1cd1930', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60a6dc6c7c8f4f06b380816ca7e12999', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e59ac58d-8840-40ec-8ba3-0e4511f74482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=055b9dea-00a9-4cec-937b-00bb57e166e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=ca1db542-80c5-48ff-a87c-ab82662c6823) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:08:31 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:31.980 108026 INFO neutron.agent.ovn.metadata.agent [-] Port ca1db542-80c5-48ff-a87c-ab82662c6823 in datapath 85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b unbound from our chassis
Feb 24 16:08:31 compute-0 nova_compute[188703]: 2026-02-24 16:08:31.988 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:31 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:31.987 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:08:31 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:31.989 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ea8ae1-8360-44dd-839c-33c6956b9b79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:31 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:31.991 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b namespace which is not needed anymore
Feb 24 16:08:32 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 24 16:08:32 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Consumed 7.039s CPU time.
Feb 24 16:08:32 compute-0 systemd-machined[158049]: Machine qemu-7-instance-00000006 terminated.
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.053 188707 DEBUG nova.network.neutron [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updated VIF entry in instance network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.055 188707 DEBUG nova.network.neutron [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.082 188707 DEBUG nova.network.neutron [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.101 188707 DEBUG oslo_concurrency.lockutils [req-35365742-468b-4453-8474-50d82f68a89a req-568fd32e-14ce-4642-b16c-072ee60b10b3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.115 188707 INFO nova.compute.manager [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] Took 2.24 seconds to deallocate network for instance.
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.130 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.136 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.172 188707 INFO nova.virt.libvirt.driver [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Instance destroyed successfully.
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.173 188707 DEBUG nova.objects.instance [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lazy-loading 'resources' on Instance uuid 14d740e5-75fa-4dec-a80f-f967c1cd1930 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.180 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.180 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:32 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [NOTICE]   (252403) : haproxy version is 2.8.14-c23fe91
Feb 24 16:08:32 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [NOTICE]   (252403) : path to executable is /usr/sbin/haproxy
Feb 24 16:08:32 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [WARNING]  (252403) : Exiting Master process...
Feb 24 16:08:32 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [ALERT]    (252403) : Current worker (252405) exited with code 143 (Terminated)
Feb 24 16:08:32 compute-0 neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b[252399]: [WARNING]  (252403) : All workers exited. Exiting... (0)
Feb 24 16:08:32 compute-0 systemd[1]: libpod-9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f.scope: Deactivated successfully.
Feb 24 16:08:32 compute-0 podman[252681]: 2026-02-24 16:08:32.211006771 +0000 UTC m=+0.087691280 container died 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.214 188707 DEBUG nova.virt.libvirt.vif [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2029362323',display_name='tempest-ServersTestJSON-server-2029362323',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2029362323',id=6,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUKNv5sASPVVbHbeNJyEv+jTZFVLgLh0UG0y7vF2Z+RF00Ms2LzT+bzZz4VmeRuEyhOSEtsx0wEQBSrjeB6gGMSTlb2QGEtdakWbMAcLEVgfuewmLKJ9ffPHeSzJdqhZQ==',key_name='tempest-keypair-2010026268',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:08:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='60a6dc6c7c8f4f06b380816ca7e12999',ramdisk_id='',reservation_id='r-xiuewpiu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-767085278',owner_user_name='tempest-ServersTestJSON-767085278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:08:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11baf60433794c759ac9aae534db1341',uuid=14d740e5-75fa-4dec-a80f-f967c1cd1930,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.216 188707 DEBUG nova.network.os_vif_util [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converting VIF {"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.217 188707 DEBUG nova.network.os_vif_util [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.217 188707 DEBUG os_vif [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.219 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.219 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1db542-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.223 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.226 188707 INFO os_vif [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:38:17,bridge_name='br-int',has_traffic_filtering=True,id=ca1db542-80c5-48ff-a87c-ab82662c6823,network=Network(85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca1db542-80')
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.226 188707 INFO nova.virt.libvirt.driver [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Deleting instance files /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930_del
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.227 188707 INFO nova.virt.libvirt.driver [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Deletion of /var/lib/nova/instances/14d740e5-75fa-4dec-a80f-f967c1cd1930_del complete
Feb 24 16:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f-userdata-shm.mount: Deactivated successfully.
Feb 24 16:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a8f0e19fa1a647ff2172beee0193bfbd5242c7358ae198cc9c0bfd15d9694aa-merged.mount: Deactivated successfully.
Feb 24 16:08:32 compute-0 podman[252681]: 2026-02-24 16:08:32.269436889 +0000 UTC m=+0.146121408 container cleanup 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:08:32 compute-0 systemd[1]: libpod-conmon-9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f.scope: Deactivated successfully.
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.308 188707 INFO nova.compute.manager [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Took 0.40 seconds to destroy the instance on the hypervisor.
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.309 188707 DEBUG oslo.service.loopingcall [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.310 188707 DEBUG nova.compute.provider_tree [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.311 188707 DEBUG nova.compute.manager [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.312 188707 DEBUG nova.network.neutron [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.316 188707 DEBUG nova.compute.manager [req-0fa57e34-a3cc-4f44-92d7-14585b36cdf7 req-2fcfce21-e45d-4e8d-a8c6-30cee2ea4329 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e99b5727-be77-4c73-a60b-26188853674c] Received event network-vif-deleted-5d11a818-6e9b-4361-9065-15b7ad0b90cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.331 188707 DEBUG nova.scheduler.client.report [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.363 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:32 compute-0 podman[252720]: 2026-02-24 16:08:32.374723576 +0000 UTC m=+0.069734793 container remove 9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.379 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a267c1-64f7-4263-bedf-c5b7279c3a77]: (4, ('Tue Feb 24 04:08:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b (9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f)\n9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f\nTue Feb 24 04:08:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b (9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f)\n9fbb5304be9766a3856f0093038b4ab63d544d48fbc7ea162800a7838c86a06f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.382 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcfcfc6-4121-4582-9a55-99dfa6217b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.383 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85731750-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:08:32 compute-0 kernel: tap85731750-00: left promiscuous mode
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.387 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.397 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3a4d70-5aae-49cb-bf16-f1c3475fcf7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.398 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.402 188707 INFO nova.scheduler.client.report [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Deleted allocations for instance e99b5727-be77-4c73-a60b-26188853674c
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.415 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6b3cc7-4052-4d34-9ff5-3252853e84c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.417 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4a87f829-f0bc-4cf4-8e8c-525cbbc3f4bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.430 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3b07f0-08ed-4958-b3ee-062c674d07a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501432, 'reachable_time': 32075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252732, 'error': None, 'target': 'ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d85731750\x2d0f54\x2d4a4e\x2dae1a\x2dccd1a0f6aa0b.mount: Deactivated successfully.
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.449 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:08:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:32.449 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[c32ac39f-25b9-406f-8fa0-56c7c6523cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:08:32 compute-0 nova_compute[188703]: 2026-02-24 16:08:32.495 188707 DEBUG oslo_concurrency.lockutils [None req-1bd19018-af5e-49ad-a198-31451539318f 5f4c6a77687e48f2b855bd5e6fb8bb86 8b20224ac879431e8ca256556525e6fd - - default default] Lock "e99b5727-be77-4c73-a60b-26188853674c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:33 compute-0 nova_compute[188703]: 2026-02-24 16:08:33.302 188707 DEBUG nova.network.neutron [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updated VIF entry in instance network info cache for port ca1db542-80c5-48ff-a87c-ab82662c6823. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:08:33 compute-0 nova_compute[188703]: 2026-02-24 16:08:33.304 188707 DEBUG nova.network.neutron [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updating instance_info_cache with network_info: [{"id": "ca1db542-80c5-48ff-a87c-ab82662c6823", "address": "fa:16:3e:13:38:17", "network": {"id": "85731750-0f54-4a4e-ae1a-ccd1a0f6aa0b", "bridge": "br-int", "label": "tempest-ServersTestJSON-4040687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60a6dc6c7c8f4f06b380816ca7e12999", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca1db542-80", "ovs_interfaceid": "ca1db542-80c5-48ff-a87c-ab82662c6823", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:33 compute-0 nova_compute[188703]: 2026-02-24 16:08:33.323 188707 DEBUG oslo_concurrency.lockutils [req-0d4823f6-3191-482f-85ee-f3e8ecc6117d req-6c307a19-958d-41ff-ba61-d4e5e07fb846 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-14d740e5-75fa-4dec-a80f-f967c1cd1930" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:08:33 compute-0 nova_compute[188703]: 2026-02-24 16:08:33.480 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.260 188707 DEBUG nova.network.neutron [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.282 188707 INFO nova.compute.manager [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Took 1.97 seconds to deallocate network for instance.
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.329 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.330 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.424 188707 DEBUG nova.compute.provider_tree [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.460 188707 DEBUG nova.scheduler.client.report [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.555 188707 DEBUG nova.compute.manager [req-4c6834b2-963d-4c88-9f72-30e3a50dcccb req-3f2bf56b-b082-42e2-a2a4-499d213012ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-vif-deleted-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.629 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.673 188707 INFO nova.scheduler.client.report [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Deleted allocations for instance 14d740e5-75fa-4dec-a80f-f967c1cd1930
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.696 188707 DEBUG nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-vif-unplugged-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.697 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.697 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.698 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.699 188707 DEBUG nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] No waiting events found dispatching network-vif-unplugged-ca1db542-80c5-48ff-a87c-ab82662c6823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.699 188707 WARNING nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received unexpected event network-vif-unplugged-ca1db542-80c5-48ff-a87c-ab82662c6823 for instance with vm_state deleted and task_state None.
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.700 188707 DEBUG nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.701 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.701 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.702 188707 DEBUG oslo_concurrency.lockutils [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.703 188707 DEBUG nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] No waiting events found dispatching network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.703 188707 WARNING nova.compute.manager [req-d866e51f-18a2-4f40-ac34-39f458a6116e req-916458b2-acb9-45be-a7f0-d7e2aebe0a60 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Received unexpected event network-vif-plugged-ca1db542-80c5-48ff-a87c-ab82662c6823 for instance with vm_state deleted and task_state None.
Feb 24 16:08:34 compute-0 nova_compute[188703]: 2026-02-24 16:08:34.828 188707 DEBUG oslo_concurrency.lockutils [None req-a88d3ce5-687f-466d-90d6-d98516ea4555 11baf60433794c759ac9aae534db1341 60a6dc6c7c8f4f06b380816ca7e12999 - - default default] Lock "14d740e5-75fa-4dec-a80f-f967c1cd1930" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
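The Acquiring / acquired / "released" triplets throughout this log come from oslo.concurrency's lock wrapper; the waited/held durations it prints (2.927s above) are how long the call queued and how long it ran with the lock. A standalone sketch of the pattern (Nova reaches it through a thin wrapper of its own, but the decorator below is the real oslo API):

    from oslo_concurrency import lockutils

    # Entry and exit of the decorated function produce exactly the
    # 'Lock "..." acquired / "released"' DEBUG lines seen above.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # body runs with the named in-process lock held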
Feb 24 16:08:37 compute-0 podman[252733]: 2026-02-24 16:08:37.120139437 +0000 UTC m=+0.078603819 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:08:37 compute-0 nova_compute[188703]: 2026-02-24 16:08:37.222 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:37 compute-0 ovn_controller[98701]: 2026-02-24T16:08:37Z|00092|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:08:37 compute-0 nova_compute[188703]: 2026-02-24 16:08:37.626 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:38 compute-0 nova_compute[188703]: 2026-02-24 16:08:38.483 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:40 compute-0 nova_compute[188703]: 2026-02-24 16:08:40.991 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:42 compute-0 nova_compute[188703]: 2026-02-24 16:08:42.225 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:43 compute-0 nova_compute[188703]: 2026-02-24 16:08:43.486 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:44 compute-0 nova_compute[188703]: 2026-02-24 16:08:44.481 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:44 compute-0 nova_compute[188703]: 2026-02-24 16:08:44.741 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949309.7391865, e99b5727-be77-4c73-a60b-26188853674c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:44 compute-0 nova_compute[188703]: 2026-02-24 16:08:44.742 188707 INFO nova.compute.manager [-] [instance: e99b5727-be77-4c73-a60b-26188853674c] VM Stopped (Lifecycle Event)
Feb 24 16:08:44 compute-0 nova_compute[188703]: 2026-02-24 16:08:44.767 188707 DEBUG nova.compute.manager [None req-661707f0-6454-4276-92ea-0ef75fb6f265 - - - - - -] [instance: e99b5727-be77-4c73-a60b-26188853674c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:44 compute-0 ovn_controller[98701]: 2026-02-24T16:08:44Z|00093|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:08:44 compute-0 nova_compute[188703]: 2026-02-24 16:08:44.998 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:46 compute-0 podman[252755]: 2026-02-24 16:08:46.144469095 +0000 UTC m=+0.093726477 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:08:46 compute-0 podman[252756]: 2026-02-24 16:08:46.148409455 +0000 UTC m=+0.098464018 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:08:46 compute-0 nova_compute[188703]: 2026-02-24 16:08:46.578 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:47 compute-0 nova_compute[188703]: 2026-02-24 16:08:47.168 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949312.16615, 14d740e5-75fa-4dec-a80f-f967c1cd1930 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:08:47 compute-0 nova_compute[188703]: 2026-02-24 16:08:47.169 188707 INFO nova.compute.manager [-] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] VM Stopped (Lifecycle Event)
Feb 24 16:08:47 compute-0 nova_compute[188703]: 2026-02-24 16:08:47.191 188707 DEBUG nova.compute.manager [None req-e675e68d-4a86-4d62-86ba-fe74aab8cba1 - - - - - -] [instance: 14d740e5-75fa-4dec-a80f-f967c1cd1930] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:08:47 compute-0 nova_compute[188703]: 2026-02-24 16:08:47.228 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:48 compute-0 nova_compute[188703]: 2026-02-24 16:08:48.490 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:49 compute-0 nova_compute[188703]: 2026-02-24 16:08:49.979 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:50 compute-0 podman[252797]: 2026-02-24 16:08:50.151314438 +0000 UTC m=+0.096348430 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Feb 24 16:08:50 compute-0 podman[252796]: 2026-02-24 16:08:50.170420657 +0000 UTC m=+0.119660596 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_id=kepler, io.buildah.version=1.29.0)
Feb 24 16:08:52 compute-0 nova_compute[188703]: 2026-02-24 16:08:52.232 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:53 compute-0 nova_compute[188703]: 2026-02-24 16:08:53.493 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:55 compute-0 podman[252850]: 2026-02-24 16:08:55.146800687 +0000 UTC m=+0.096034652 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 16:08:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:55.736 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:08:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:55.737 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:08:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:08:55.737 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:08:56 compute-0 ovn_controller[98701]: 2026-02-24T16:08:56Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:99:52 10.100.0.7
Feb 24 16:08:56 compute-0 ovn_controller[98701]: 2026-02-24T16:08:56Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:99:52 10.100.0.7
Feb 24 16:08:57 compute-0 nova_compute[188703]: 2026-02-24 16:08:57.236 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:57 compute-0 nova_compute[188703]: 2026-02-24 16:08:57.615 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:58 compute-0 nova_compute[188703]: 2026-02-24 16:08:58.497 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:08:59 compute-0 podman[204685]: time="2026-02-24T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:08:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29236 "" "Go-http-client/1.1"
Feb 24 16:08:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 24 16:09:01 compute-0 ovn_controller[98701]: 2026-02-24T16:09:01Z|00094|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:09:01 compute-0 nova_compute[188703]: 2026-02-24 16:09:01.050 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:01 compute-0 openstack_network_exporter[207830]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:09:01 compute-0 openstack_network_exporter[207830]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:09:02 compute-0 podman[252873]: 2026-02-24 16:09:02.125449969 +0000 UTC m=+0.077649521 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Feb 24 16:09:02 compute-0 podman[252874]: 2026-02-24 16:09:02.148290562 +0000 UTC m=+0.091964828 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:09:02 compute-0 nova_compute[188703]: 2026-02-24 16:09:02.239 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:02 compute-0 nova_compute[188703]: 2026-02-24 16:09:02.885 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:03 compute-0 nova_compute[188703]: 2026-02-24 16:09:03.499 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:04 compute-0 nova_compute[188703]: 2026-02-24 16:09:04.214 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:04 compute-0 nova_compute[188703]: 2026-02-24 16:09:04.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:06 compute-0 nova_compute[188703]: 2026-02-24 16:09:06.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:06 compute-0 nova_compute[188703]: 2026-02-24 16:09:06.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:09:06 compute-0 nova_compute[188703]: 2026-02-24 16:09:06.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:09:07 compute-0 nova_compute[188703]: 2026-02-24 16:09:07.242 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:07 compute-0 nova_compute[188703]: 2026-02-24 16:09:07.496 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:07 compute-0 nova_compute[188703]: 2026-02-24 16:09:07.496 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:07 compute-0 nova_compute[188703]: 2026-02-24 16:09:07.498 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:09:07 compute-0 nova_compute[188703]: 2026-02-24 16:09:07.498 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:08 compute-0 podman[252916]: 2026-02-24 16:09:08.133365461 +0000 UTC m=+0.082772634 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:09:08 compute-0 nova_compute[188703]: 2026-02-24 16:09:08.502 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.772 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
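The network_info payload above is plain JSON once the log prefix is stripped, so the addresses it carries (fixed 10.100.0.7, floating 192.168.122.227) can be recovered by dict traversal. A sketch, assuming the hypothetical logged_payload holds the [...] list copied from that line:

    import json

    nw_info = json.loads(logged_payload)  # assumption: JSON list from the log line
    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed', ip['address'])
                for fip in ip.get('floating_ips', []):
                    print('floating', fip['address'])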
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.789 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.790 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.790 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.962 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:09 compute-0 nova_compute[188703]: 2026-02-24 16:09:09.962 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:09:10 compute-0 nova_compute[188703]: 2026-02-24 16:09:10.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:10 compute-0 nova_compute[188703]: 2026-02-24 16:09:10.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:12 compute-0 nova_compute[188703]: 2026-02-24 16:09:12.247 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:13.050 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
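The "Matched UPDATE: SbGlobalUpdateEvent(...)" line is ovsdbapp's row-event machinery firing on the SB_Global table; the repr in the log mirrors the constructor arguments. A rough sketch of how such an event class is declared (argument order per ovsdbapp's RowEvent; treat the details as an assumption, the agent's real class lives in neutron):

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            pass  # e.g. bump neutron:ovn-metadata-sb-cfg, as at 16:09:18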
Feb 24 16:09:13 compute-0 nova_compute[188703]: 2026-02-24 16:09:13.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:13.052 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:09:13 compute-0 nova_compute[188703]: 2026-02-24 16:09:13.213 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:13 compute-0 nova_compute[188703]: 2026-02-24 16:09:13.506 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:13 compute-0 ovn_controller[98701]: 2026-02-24T16:09:13Z|00095|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:09:13 compute-0 nova_compute[188703]: 2026-02-24 16:09:13.609 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:15 compute-0 nova_compute[188703]: 2026-02-24 16:09:15.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:09:15 compute-0 nova_compute[188703]: 2026-02-24 16:09:15.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:15 compute-0 nova_compute[188703]: 2026-02-24 16:09:15.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:15 compute-0 nova_compute[188703]: 2026-02-24 16:09:15.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:15 compute-0 nova_compute[188703]: 2026-02-24 16:09:15.977 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.056 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.135 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.136 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.192 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
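The resource audit shells out twice to qemu-img, capped by oslo.concurrency's prlimit shim (1073741824 bytes of address space, 30 s of CPU). The command below is copied verbatim from the log and can be replayed as-is; only the final print assumes the standard qemu-img JSON keys:

    import json
    import subprocess

    cmd = [
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk',
        '--force-share', '--output=json',
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])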
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.604 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.606 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=72.16291046142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.606 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.606 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.692 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4ed039f2-92fd-4c07-9a3c-df2da1172e12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.693 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.693 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.741 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.765 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.799 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:09:16 compute-0 nova_compute[188703]: 2026-02-24 16:09:16.800 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:17 compute-0 podman[252947]: 2026-02-24 16:09:17.108678902 +0000 UTC m=+0.067840319 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:09:17 compute-0 podman[252948]: 2026-02-24 16:09:17.123718739 +0000 UTC m=+0.072312724 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 24 16:09:17 compute-0 nova_compute[188703]: 2026-02-24 16:09:17.250 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:18 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:18.054 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:18 compute-0 nova_compute[188703]: 2026-02-24 16:09:18.266 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:18 compute-0 nova_compute[188703]: 2026-02-24 16:09:18.509 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:20 compute-0 nova_compute[188703]: 2026-02-24 16:09:20.055 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:20 compute-0 nova_compute[188703]: 2026-02-24 16:09:20.984 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:21 compute-0 podman[252988]: 2026-02-24 16:09:21.153818157 +0000 UTC m=+0.105523074 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Feb 24 16:09:21 compute-0 podman[252989]: 2026-02-24 16:09:21.1795493 +0000 UTC m=+0.124344586 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Feb 24 16:09:22 compute-0 nova_compute[188703]: 2026-02-24 16:09:22.253 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:22 compute-0 nova_compute[188703]: 2026-02-24 16:09:22.476 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:23 compute-0 nova_compute[188703]: 2026-02-24 16:09:23.511 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:24 compute-0 ovn_controller[98701]: 2026-02-24T16:09:24Z|00096|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:09:24 compute-0 nova_compute[188703]: 2026-02-24 16:09:24.493 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:25 compute-0 nova_compute[188703]: 2026-02-24 16:09:25.958 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:26 compute-0 podman[253026]: 2026-02-24 16:09:26.157183267 +0000 UTC m=+0.114203485 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, vendor=Red Hat, Inc., version=9.7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Feb 24 16:09:27 compute-0 nova_compute[188703]: 2026-02-24 16:09:27.257 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:28 compute-0 nova_compute[188703]: 2026-02-24 16:09:28.513 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:29 compute-0 podman[204685]: time="2026-02-24T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:09:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29236 "" "Go-http-client/1.1"
Feb 24 16:09:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 24 16:09:31 compute-0 openstack_network_exporter[207830]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:09:31 compute-0 openstack_network_exporter[207830]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:09:31 compute-0 nova_compute[188703]: 2026-02-24 16:09:31.558 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.259 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.272 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.273 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.424 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.624 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.625 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.639 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:09:32 compute-0 nova_compute[188703]: 2026-02-24 16:09:32.640 188707 INFO nova.compute.claims [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.020 188707 DEBUG nova.compute.provider_tree [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.104 188707 DEBUG nova.scheduler.client.report [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:09:33 compute-0 podman[253047]: 2026-02-24 16:09:33.205039657 +0000 UTC m=+0.156367002 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:09:33 compute-0 podman[253048]: 2026-02-24 16:09:33.226749509 +0000 UTC m=+0.173052206 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.309 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.310 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.412 188707 DEBUG nova.objects.instance [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lazy-loading 'flavor' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.456 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.457 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.515 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.562 188707 DEBUG oslo_concurrency.lockutils [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.563 188707 DEBUG oslo_concurrency.lockutils [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.589 188707 INFO nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.658 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.796 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.799 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.800 188707 INFO nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Creating image(s)
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.805 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.806 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.808 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.840 188707 DEBUG nova.policy [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3fc739339a5496cb9c0e2e0eebefd55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b63b7c206004c42b699bdc42c129b6b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.844 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.890 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.891 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.891 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.902 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.945 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:33 compute-0 nova_compute[188703]: 2026-02-24 16:09:33.946 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.118 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk 1073741824" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.119 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.119 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.184 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.185 188707 DEBUG nova.virt.disk.api [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Checking if we can resize image /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.186 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.258 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.259 188707 DEBUG nova.virt.disk.api [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Cannot resize image /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.259 188707 DEBUG nova.objects.instance [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'migration_context' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.323 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.324 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Ensure instance console log exists: /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.325 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.325 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:34 compute-0 nova_compute[188703]: 2026-02-24 16:09:34.325 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:35 compute-0 nova_compute[188703]: 2026-02-24 16:09:35.581 188707 DEBUG nova.network.neutron [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:09:35 compute-0 nova_compute[188703]: 2026-02-24 16:09:35.734 188707 DEBUG nova.compute.manager [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:35 compute-0 nova_compute[188703]: 2026-02-24 16:09:35.735 188707 DEBUG nova.compute.manager [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing instance network info cache due to event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:35 compute-0 nova_compute[188703]: 2026-02-24 16:09:35.736 188707 DEBUG oslo_concurrency.lockutils [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:35 compute-0 nova_compute[188703]: 2026-02-24 16:09:35.856 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Successfully created port: 1c040558-99c8-40bd-8b21-1337faca7edc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.263 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.452 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Successfully updated port: 1c040558-99c8-40bd-8b21-1337faca7edc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.468 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.469 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquired lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.469 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.820 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.849 188707 DEBUG nova.compute.manager [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-changed-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.849 188707 DEBUG nova.compute.manager [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Refreshing instance network info cache due to event network-changed-1c040558-99c8-40bd-8b21-1337faca7edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:37 compute-0 nova_compute[188703]: 2026-02-24 16:09:37.850 188707 DEBUG oslo_concurrency.lockutils [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.228 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.229 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.258 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.358 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.359 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.370 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.370 188707 INFO nova.compute.claims [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.412 188707 DEBUG nova.network.neutron [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.438 188707 DEBUG oslo_concurrency.lockutils [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.438 188707 DEBUG nova.compute.manager [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.438 188707 DEBUG nova.compute.manager [None req-cd0943f9-aa8d-42e3-bd85-f623fc3a157d 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] network_info to inject: |[{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.440 188707 DEBUG oslo_concurrency.lockutils [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.441 188707 DEBUG nova.network.neutron [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.517 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.522 188707 DEBUG nova.compute.provider_tree [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.538 188707 DEBUG nova.scheduler.client.report [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.561 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.562 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.608 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.609 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.629 188707 INFO nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.647 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.730 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.732 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.733 188707 INFO nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Creating image(s)
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.734 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.735 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.736 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.760 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.819 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.820 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.822 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.847 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.908 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.910 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.944 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.945 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.946 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.992 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.993 188707 DEBUG nova.virt.disk.api [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Checking if we can resize image /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:09:38 compute-0 nova_compute[188703]: 2026-02-24 16:09:38.993 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.011 188707 DEBUG nova.policy [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7cec00195bca4d15bbb0449e21faedcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d42735c7eb84888b6c3dca096466e04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.056 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.057 188707 DEBUG nova.virt.disk.api [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Cannot resize image /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.057 188707 DEBUG nova.objects.instance [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'migration_context' on Instance uuid fc3a62d6-b05f-4032-a883-8c231d29ff29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.085 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.087 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Ensure instance console log exists: /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.088 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.088 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.089 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:39 compute-0 podman[253116]: 2026-02-24 16:09:39.117524454 +0000 UTC m=+0.077367254 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.498 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.794 188707 DEBUG nova.network.neutron [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.835 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Releasing lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.835 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance network_info: |[{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.835 188707 DEBUG oslo_concurrency.lockutils [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.836 188707 DEBUG nova.network.neutron [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Refreshing network info cache for port 1c040558-99c8-40bd-8b21-1337faca7edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.837 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.838 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start _get_guest_xml network_info=[{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.837 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f260291bb90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.848 188707 WARNING nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.849 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4ed039f2-92fd-4c07-9a3c-df2da1172e12 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 16:09:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:39.851 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4ed039f2-92fd-4c07-9a3c-df2da1172e12 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.868 188707 DEBUG nova.virt.libvirt.host [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.869 188707 DEBUG nova.virt.libvirt.host [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.877 188707 DEBUG nova.virt.libvirt.host [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.878 188707 DEBUG nova.virt.libvirt.host [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.879 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.879 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.880 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.880 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.881 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.882 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.882 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.883 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.883 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.884 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.884 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.885 188707 DEBUG nova.virt.hardware [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.892 188707 DEBUG nova.virt.libvirt.vif [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.892 188707 DEBUG nova.network.os_vif_util [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.893 188707 DEBUG nova.network.os_vif_util [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.895 188707 DEBUG nova.objects.instance [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'pci_devices' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.912 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <uuid>e365caeb-efd7-437b-aa10-e579f7c99f2b</uuid>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <name>instance-00000009</name>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:name>tempest-ServerActionsTestJSON-server-1465534534</nova:name>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:09:39</nova:creationTime>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:user uuid="e3fc739339a5496cb9c0e2e0eebefd55">tempest-ServerActionsTestJSON-1577843196-project-member</nova:user>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:project uuid="9b63b7c206004c42b699bdc42c129b6b">tempest-ServerActionsTestJSON-1577843196</nova:project>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         <nova:port uuid="1c040558-99c8-40bd-8b21-1337faca7edc">
Feb 24 16:09:39 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <system>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="serial">e365caeb-efd7-437b-aa10-e579f7c99f2b</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="uuid">e365caeb-efd7-437b-aa10-e579f7c99f2b</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </system>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <os>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </os>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <features>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </features>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:30:c9:69"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <target dev="tap1c040558-99"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/console.log" append="off"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <video>
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </video>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:09:39 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:09:39 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:09:39 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:09:39 compute-0 nova_compute[188703]: </domain>
Feb 24 16:09:39 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
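
The block above is the complete libvirt domain XML that nova-compute rendered in _get_guest_xml just before defining the guest. As a minimal sketch (assuming libvirt-python and access to the qemu:///system socket on compute-0; this is not nova's own code path), the same document can be read back from libvirt once the domain exists:

    import libvirt

    UUID = "e365caeb-efd7-437b-aa10-e579f7c99f2b"   # instance UUID from this log

    conn = libvirt.open("qemu:///system")           # hypervisor nova-compute talks to
    try:
        dom = conn.lookupByUUIDString(UUID)
        print(dom.XMLDesc(0))                       # emits the <domain> document above
    finally:
        conn.close()
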
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.913 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Preparing to wait for external event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.913 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.914 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.914 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.915 188707 DEBUG nova.virt.libvirt.vif [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.916 188707 DEBUG nova.network.os_vif_util [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.917 188707 DEBUG nova.network.os_vif_util [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.917 188707 DEBUG os_vif [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.918 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.919 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.919 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.924 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.924 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c040558-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.925 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c040558-99, col_values=(('external_ids', {'iface-id': '1c040558-99c8-40bd-8b21-1337faca7edc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:c9:69', 'vm-uuid': 'e365caeb-efd7-437b-aa10-e579f7c99f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:39 compute-0 NetworkManager[56995]: <info>  [1771949379.9310] manager: (tap1c040558-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.931 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.938 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:39 compute-0 nova_compute[188703]: 2026-02-24 16:09:39.939 188707 INFO os_vif [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99')
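
The three ovsdbapp transactions above are the whole OVS side of the plug: an idempotent add-bridge (a no-op here, hence "Transaction caused no change"), then an add-port plus a DbSetCommand that stamps the Interface row with the iface-id, attached-mac and vm-uuid external_ids that OVN uses to bind the port. A small verification sketch (a hypothetical check, not part of nova; assumes ovs-vsctl is installed on compute-0):

    import json
    import subprocess

    PORT = "tap1c040558-99"   # tap device name from this log

    # Read back the external_ids column that the DbSetCommand above wrote.
    out = subprocess.run(
        ["ovs-vsctl", "--format=json", "--columns=external_ids",
         "list", "Interface", PORT],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out))    # expect iface-id, attached-mac and vm-uuid as logged
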
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.004 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.004 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.005 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] No VIF found with MAC fa:16:3e:30:c9:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.005 188707 INFO nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Using config drive
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.066 188707 DEBUG nova.objects.instance [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lazy-loading 'flavor' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.094 188707 DEBUG oslo_concurrency.lockutils [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.507 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2110 Content-Type: application/json Date: Tue, 24 Feb 2026 16:09:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a88041a4-079e-4f63-b697-c41c879ec53c x-openstack-request-id: req-a88041a4-079e-4f63-b697-c41c879ec53c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.508 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4ed039f2-92fd-4c07-9a3c-df2da1172e12", "name": "tempest-AttachInterfacesUnderV243Test-server-1086727361", "status": "ACTIVE", "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "user_id": "40089d2ccf484a7c9ecdf03cf6fe53bb", "metadata": {}, "hostId": "588aed80712459da36b84a619e5c523a1088eaa060bb9eb80ed673f9", "image": {"id": "ee41af80-6a60-4735-8135-3a06de2a36b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ee41af80-6a60-4735-8135-3a06de2a36b2"}]}, "flavor": {"id": "3303ac8b-27ad-4047-abf8-38e38cd23b6f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3303ac8b-27ad-4047-abf8-38e38cd23b6f"}]}, "created": "2026-02-24T16:08:11Z", "updated": "2026-02-24T16:09:38Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1357532606-network": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1c:99:52"}, {"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1c:99:52"}, {"version": 4, "addr": "192.168.122.227", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:1c:99:52"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4ed039f2-92fd-4c07-9a3c-df2da1172e12"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4ed039f2-92fd-4c07-9a3c-df2da1172e12"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-34402587", "OS-SRV-USG:launched_at": "2026-02-24T16:08:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1834275733"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.508 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4ed039f2-92fd-4c07-9a3c-df2da1172e12 used request id req-a88041a4-079e-4f63-b697-c41c879ec53c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.510 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4ed039f2-92fd-4c07-9a3c-df2da1172e12', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1086727361', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ff039f17be824e0da1015761ba1fc96a', 'user_id': '40089d2ccf484a7c9ecdf03cf6fe53bb', 'hostId': '588aed80712459da36b84a619e5c523a1088eaa060bb9eb80ed673f9', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
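
The instance data dict above is ceilometer's discovery cache entry, built from the nova GET logged just before it. A sketch of the same lookup through python-novaclient over a keystoneauth session (the credentials and auth_url below are placeholders, not values from this deployment):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    server = nova.servers.get("4ed039f2-92fd-4c07-9a3c-df2da1172e12")
    print(server.name, getattr(server, "OS-EXT-STS:vm_state"))
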
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.510 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.511 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:09:40.511687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.543 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/memory.usage volume: 45.875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
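
The 45.875 figure is MiB of guest memory in use, taken from the balloon statistics that a <stats period="10"/> memballoon element (as in the domain XML earlier in this log) makes libvirt collect. A sketch of the underlying counters, assuming usage is derived as available minus unused when the balloon reports both, which matches how ceilometer's libvirt inspector computes it:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000007")    # instance name from this log
    stats = dom.memoryStats()                       # KiB counters from the balloon driver
    print((stats["available"] - stats["unused"]) / 1024.0)   # MiB, e.g. 45.875
    conn.close()
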
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.546 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:09:40.546285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.564 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.565 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:09:40.567563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.572 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4ed039f2-92fd-4c07-9a3c-df2da1172e12 / tapc26aa9e8-b1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.573 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.576 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:09:40.575722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.579 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:09:40.578700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.580 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.581 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:09:40.580992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.582 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.583 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:09:40.583243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.584 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:09:40.585917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.631 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.bytes volume: 30099968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.632 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
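
disk.device.read.bytes is sampled per disk, which is why two volumes appear above: one per <target dev=...> in the guest. A sketch of the raw counters via libvirt block statistics (assuming this instance's devices are named vda and sda, as in the domain XML earlier in this log):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000007")    # instance name from this log
    for dev in ("vda", "sda"):
        # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "read.bytes:", rd_bytes, "read.requests:", rd_req)
    conn.close()
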
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:09:40.634692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.636 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:09:40.636141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.636 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.latency volume: 1115673671 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.637 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.latency volume: 98319540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.637 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.639 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/cpu volume: 31490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:09:40.638745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
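
The cpu sample above (31490000000) is cumulative guest CPU time in nanoseconds, not a rate; a utilisation percentage needs two samples over a known interval. A worked sketch (the second sample and the 10 s interval below are illustrative, not taken from this log):

    def cpu_util_pct(prev_ns, cur_ns, elapsed_s, vcpus=1):
        """Percentage of available CPU consumed between two cumulative samples."""
        return 100.0 * (cur_ns - prev_ns) / (elapsed_s * vcpus * 1e9)

    # 300 ms of CPU time over 10 s on one vCPU -> 3.0
    print(cpu_util_pct(31_490_000_000, 31_790_000_000, 10.0))
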
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.639 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.requests volume: 1082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:09:40.640240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.640 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.641 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.642 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.642 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:09:40.642301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.644 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:09:40.644337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.645 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.646 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:09:40.645716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.646 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.647 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.latency volume: 5397600014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.647 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:09:40.647225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.649 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.bytes volume: 73015296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:09:40.649385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.650 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.651 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T16:09:40.651660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.652 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1086727361>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1086727361>]
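The ERROR above (and the matching one for network.outgoing.bytes.rate later in this cycle) is expected with the libvirt inspector: it does not produce *.rate samples, as the "LibvirtInspector does not provide data" line just before it says, so the pollster raises PollsterPermanentError and the manager stops polling that resource for this meter instead of failing on every interval; excluding the rate meters from the polling configuration avoids the noise entirely. A simplified sketch of that blacklisting pattern, with illustrative names rather than the actual ceilometer source:

    # Simplified sketch of the PollsterPermanentError handling visible above;
    # class and helper names are illustrative.
    class PollsterPermanentError(Exception):
        """Raised for resources that can never yield samples."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(pollster, resources, blacklist):
        candidates = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(candidates))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling ... anymore!": remember the
            # failing resources and skip them in every later polling cycle.
            blacklist.extend(err.fail_res_list)
            return []
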
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.654 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.655 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.requests volume: 322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:09:40.654719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.655 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:09:40.656982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:09:40.658108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:09:40.659289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:09:40.660324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T16:09:40.661391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.661 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1086727361>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1086727361>]
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.662 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:09:40.662430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.664 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.664 14 DEBUG ceilometer.compute.pollsters [-] 4ed039f2-92fd-4c07-9a3c-df2da1172e12/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:09:40.664024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.669 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:09:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:09:40.670 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
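The run of "Finished processing pollster [...]" lines above closes one polling interval: every meter that ran in this cycle reports completion within a few milliseconds of the others. A quick way to audit a captured journal like this one is to count those completion lines per meter; a small sketch (the input file name is hypothetical):

    # Count "Finished processing pollster [...]" lines per meter in a saved
    # journal; 'compute-0-journal.log' is a hypothetical file name.
    import re
    from collections import Counter

    finished = re.compile(r'Finished processing pollster \[([^\]]+)\]')
    meters = Counter()
    with open('compute-0-journal.log') as fh:
        for line in fh:
            match = finished.search(line)
            if match:
                meters[match.group(1)] += 1

    for meter, count in sorted(meters.items()):
        print(f'{meter}: {count}')
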
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.990 188707 INFO nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Creating config drive at /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config
Feb 24 16:09:40 compute-0 nova_compute[188703]: 2026-02-24 16:09:40.997 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp42nzxb25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.118 188707 DEBUG oslo_concurrency.processutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp42nzxb25" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
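The two nova_compute lines above show the config drive being built: nova collects the instance metadata in a temporary directory and packs it into an ISO 9660 image with the volume label config-2, which is how the guest later locates the drive. The same invocation, replayed with subprocess from the exact command logged above (the paths are specific to this instance and boot; note the publisher string is a single argument that the log prints unquoted):

    # Replay of the mkisofs command logged above; the output path and the
    # temporary input directory are taken verbatim from the log.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmp42nzxb25'],
        check=True,
    )
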
Feb 24 16:09:41 compute-0 kernel: tap1c040558-99: entered promiscuous mode
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.1972] manager: (tap1c040558-99): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00097|binding|INFO|Claiming lport 1c040558-99c8-40bd-8b21-1337faca7edc for this chassis.
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00098|binding|INFO|1c040558-99c8-40bd-8b21-1337faca7edc: Claiming fa:16:3e:30:c9:69 10.100.0.10
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.200 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00099|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc ovn-installed in OVS
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.208 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.215 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 systemd-udevd[253159]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:09:41 compute-0 systemd-machined[158049]: New machine qemu-9-instance-00000009.
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.248 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:69 10.100.0.10'], port_security=['fa:16:3e:30:c9:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e365caeb-efd7-437b-aa10-e579f7c99f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b63b7c206004c42b699bdc42c129b6b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '302c0bad-634d-4905-abc7-a5c548d119ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92707d4c-a464-49d7-8f37-7fa0e55d12a7, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=1c040558-99c8-40bd-8b21-1337faca7edc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.2504] device (tap1c040558-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00100|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc up in Southbound
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.249 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 1c040558-99c8-40bd-8b21-1337faca7edc in datapath 617264bd-8d71-44c7-9bb9-ef21a37be5eb bound to our chassis
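The ovn_metadata_agent reacts to this binding through an ovsdbapp row event: the "Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', ...)" line above is its matcher firing when the chassis column of a Port_Binding row changes. A condensed sketch of that pattern using the ovsdbapp RowEvent base class (the run() body here only prints; neutron's real handler provisions the metadata namespace):

    # Condensed sketch of the ovsdbapp RowEvent pattern behind
    # "Matched UPDATE: PortBindingUpdatedEvent"; the handler body is
    # illustrative, not neutron's actual implementation.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch UPDATE events on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('lport %s now bound to chassis %s'
                  % (row.logical_port, row.chassis))

In the real agent the event is registered with the southbound IDL's notify handler, so run() fires on each matching update.
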
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.2514] device (tap1c040558-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.251 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:09:41 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.263 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[f806de52-c592-4804-aecc-4e3d8d108b88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.265 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap617264bd-81 in ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
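Provisioning metadata for the datapath means wiring a veth pair: one end (tap617264bd-81) is moved into the ovnmeta-<datapath> namespace where the metadata proxy will listen, and the other end (tap617264bd-80) is plugged into br-int, as the transactions further down show. The equivalent wiring by hand with ip(8), shelled out from Python and using the names from the log:

    # Hand-wired equivalent of the veth setup logged above; the namespace
    # and interface names are taken from the log lines.
    import subprocess

    ns = 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb'
    for cmd in [
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', 'tap617264bd-80',
         'type', 'veth', 'peer', 'name', 'tap617264bd-81'],
        ['ip', 'link', 'set', 'tap617264bd-81', 'netns', ns],
        ['ip', 'link', 'set', 'tap617264bd-80', 'up'],
        ['ip', '-n', ns, 'link', 'set', 'tap617264bd-81', 'up'],
    ]:
        subprocess.run(cmd, check=True)
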
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.267 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap617264bd-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.267 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[500f8108-c3fb-4d66-9386-663aff7ffecf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.268 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[de7c5513-167c-4dd8-bec0-af0670c589ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.282 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[5abe8813-98c3-4ea3-a725-34cf7a749a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.309 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[f6513329-9fdb-4e6f-9223-b6e067a5e6d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00101|binding|INFO|Releasing lport 262cdbf1-c669-4983-b196-f68920cf4249 from this chassis (sb_readonly=0)
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.334 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[b8383c1e-e5cb-4ae6-aab2-91d78de0e307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.3411] manager: (tap617264bd-80): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.339 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1b01d9dc-4fc8-405d-9478-207bdc1533a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.364 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[d54a9239-32eb-48e7-a7ae-a1d48b550557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.372 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[399b453c-af99-4e00-928b-8f4a124527f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.3997] device (tap617264bd-80): carrier: link connected
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.407 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d5d62-3b7c-42fb-b726-1301ba77c47f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.427 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[770065bf-4d4c-40d5-9c8f-290100a0b265]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617264bd-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509390, 'reachable_time': 25366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253196, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.448 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b59c20-046e-4af0-9cb3-c8112f16387d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:49bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509390, 'tstamp': 509390}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253197, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.465 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[99ef7c65-acb7-4b46-9618-97b6f1c20701]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617264bd-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509390, 'reachable_time': 25366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253198, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
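The two privsep replies above are pyroute2-style netlink messages (RTM_NEWADDR, RTM_NEWLINK) serialized as dicts whose 'attrs' key holds nested [name, value] pairs. A minimal sketch of walking such an already-deserialized message to pull out individual attributes, using names that appear in the log:

    # Minimal sketch: look up one attribute in a pyroute2-style message dict.
    def get_attr(msg, name, default=None):
        """Return the first attribute value with the given name, else default."""
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    # Against the RTM_NEWLINK reply logged above:
    link_msg = {'attrs': [['IFLA_IFNAME', 'tap617264bd-81'],
                          ['IFLA_ADDRESS', 'fa:16:3e:b8:49:bb'],
                          ['IFLA_OPERSTATE', 'UP']]}
    print(get_attr(link_msg, 'IFLA_IFNAME'))   # tap617264bd-81
    print(get_attr(link_msg, 'IFLA_ADDRESS'))  # fa:16:3e:b8:49:bb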
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.491 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c359c9b8-fb2b-4e7d-ada8-8801a8d97e0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.539 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9080236a-aab6-4280-a8a4-e737843b9ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.541 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617264bd-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.542 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.543 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap617264bd-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:41 compute-0 kernel: tap617264bd-80: entered promiscuous mode
Feb 24 16:09:41 compute-0 NetworkManager[56995]: <info>  [1771949381.5464] manager: (tap617264bd-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.545 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.552 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap617264bd-80, col_values=(('external_ids', {'iface-id': 'd50b9c1e-a71e-49f6-a0bc-95207c7d9dc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
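The three transactions logged here (DelPortCommand, AddPortCommand, DbSetCommand) map onto ovsdbapp's Open_vSwitch-schema API. A hedged sketch of issuing the same sequence directly; the ovsdb-server socket path and the 10 s timeout are assumptions, while the method names and arguments mirror the logged commands:

    # Sketch (assumed socket path and timeout): replay the logged
    # del_port / add_port / db_set sequence for tap617264bd-80.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap617264bd-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap617264bd-80', may_exist=True))
        txn.add(api.db_set('Interface', 'tap617264bd-80',
                           ('external_ids',
                            {'iface-id': 'd50b9c1e-a71e-49f6-a0bc-95207c7d9dc7'})))

Note the agent batches each command into its own single-command transaction (txn n=1) here, which is why the no-op del_port on br-ex commits as "Transaction caused no change".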
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.553 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 ovn_controller[98701]: 2026-02-24T16:09:41Z|00102|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.555 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.556 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[23c08994-fc37-4725-ba01-6826fb13f286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.557 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:09:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:41.558 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'env', 'PROCESS_TAG=haproxy-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/617264bd-8d71-44c7-9bb9-ef21a37be5eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
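The rootwrap command above runs haproxy inside the ovnmeta- network namespace with the config just rendered. Outside of neutron, the same effect can be approximated with a plain subprocess call (sketch; namespace and config path copied from the log, and this requires root):

    # Sketch: approximate the logged rootwrap invocation with subprocess.
    import subprocess

    ns = 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/617264bd-8d71-44c7-9bb9-ef21a37be5eb.conf'
    # 'ip netns exec' enters the namespace before exec'ing haproxy,
    # so the 169.254.169.254:80 bind lands on the namespace-local tap device.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg], check=True)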
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.560 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.698 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949381.6982443, e365caeb-efd7-437b-aa10-e579f7c99f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.699 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Started (Lifecycle Event)
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.742 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.749 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949381.6983905, e365caeb-efd7-437b-aa10-e579f7c99f2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.749 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Paused (Lifecycle Event)
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.775 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.780 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.821 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] During sync_power_state the instance has a pending task (spawning). Skip.
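The "current DB power_state: 0, VM power_state: 3" values in the sync message are nova's integer power-state constants (nova.compute.power_state). A small decoder for the values seen in these lines:

    # Decoder for nova's integer power states, per nova.compute.power_state.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # The "Paused" lifecycle event above: DB still 0, hypervisor reports 3.
    print(POWER_STATE[0], '->', POWER_STATE[3])  # NOSTATE -> PAUSED

Because the instance still has task_state spawning, the sync is deliberately skipped rather than forcing the DB to PAUSED mid-build.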
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.829 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Successfully created port: 89307b57-fe85-45b9-b123-781c385e8fec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.928 188707 DEBUG nova.network.neutron [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updated VIF entry in instance network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.929 188707 DEBUG nova.network.neutron [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.961 188707 DEBUG oslo_concurrency.lockutils [req-c6b85df1-ef6d-4246-826b-7cea932ac522 req-8e977cdb-5044-4d9a-b305-55ff556f3ab4 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
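The instance_info_cache payloads logged above are plain JSON, so extracting the fixed and floating addresses per VIF is straightforward. A sketch against a trimmed copy of the structure just logged:

    # Sketch: pull fixed/floating IPs out of a network_info entry shaped
    # like the instance_info_cache payload logged above (trimmed copy).
    import json

    nw_info = json.loads('''[{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4",
      "network": {"subnets": [{"ips": [
        {"address": "10.100.0.12", "type": "fixed", "floating_ips": []},
        {"address": "10.100.0.7", "type": "fixed",
         "floating_ips": [{"address": "192.168.122.227", "type": "floating"}]}
      ]}]}}]''')

    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats or '-')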
Feb 24 16:09:41 compute-0 nova_compute[188703]: 2026-02-24 16:09:41.961 188707 DEBUG oslo_concurrency.lockutils [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:42 compute-0 podman[253236]: 2026-02-24 16:09:41.944157143 +0000 UTC m=+0.043827865 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:09:42 compute-0 podman[253236]: 2026-02-24 16:09:42.113456014 +0000 UTC m=+0.213126756 container create ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 24 16:09:42 compute-0 systemd[1]: Started libpod-conmon-ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f.scope.
Feb 24 16:09:42 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54297fbc26350a977621c9e4a774aea4d962dd6868d7bdb3781fefd5d5ec894e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:09:42 compute-0 podman[253236]: 2026-02-24 16:09:42.230184787 +0000 UTC m=+0.329855519 container init ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 24 16:09:42 compute-0 podman[253236]: 2026-02-24 16:09:42.237679755 +0000 UTC m=+0.337350468 container start ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 24 16:09:42 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [NOTICE]   (253255) : New worker (253257) forked
Feb 24 16:09:42 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [NOTICE]   (253255) : Loading success.
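With the proxy loaded, a guest on network 617264bd-... reaches it at the 169.254.169.254:80 bind from the config above; haproxy adds the X-OVN-Network-ID header before forwarding to the /var/lib/neutron/metadata_proxy socket backend. From inside such a guest the request would look like this sketch (only meaningful from within an instance on that network; the path is the standard OpenStack metadata endpoint):

    # Sketch: the request a guest makes to the proxy configured above.
    import http.client

    conn = http.client.HTTPConnection('169.254.169.254', 80, timeout=5)
    conn.request('GET', '/openstack/latest/meta_data.json')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:80])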
Feb 24 16:09:42 compute-0 nova_compute[188703]: 2026-02-24 16:09:42.719 188707 DEBUG nova.network.neutron [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updated VIF entry in instance network info cache for port 1c040558-99c8-40bd-8b21-1337faca7edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:42 compute-0 nova_compute[188703]: 2026-02-24 16:09:42.721 188707 DEBUG nova.network.neutron [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:42 compute-0 nova_compute[188703]: 2026-02-24 16:09:42.741 188707 DEBUG oslo_concurrency.lockutils [req-d09ee131-acfc-4a6e-896f-76f95964cfc6 req-2a5efab4-bd01-4c09-b4a6-31bb35e4eba6 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.093 188707 DEBUG nova.compute.manager [req-5dc6ba4f-2a82-433e-a5b4-fadfd6c97db6 req-16878532-6a88-4106-aaa4-99c2592ad640 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.094 188707 DEBUG oslo_concurrency.lockutils [req-5dc6ba4f-2a82-433e-a5b4-fadfd6c97db6 req-16878532-6a88-4106-aaa4-99c2592ad640 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.094 188707 DEBUG oslo_concurrency.lockutils [req-5dc6ba4f-2a82-433e-a5b4-fadfd6c97db6 req-16878532-6a88-4106-aaa4-99c2592ad640 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.094 188707 DEBUG oslo_concurrency.lockutils [req-5dc6ba4f-2a82-433e-a5b4-fadfd6c97db6 req-16878532-6a88-4106-aaa4-99c2592ad640 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.094 188707 DEBUG nova.compute.manager [req-5dc6ba4f-2a82-433e-a5b4-fadfd6c97db6 req-16878532-6a88-4106-aaa4-99c2592ad640 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Processing event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.095 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
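The vif-plugged handshake in the lines above is a wait/signal pair: the spawn thread registers an expected event name and blocks until the neutron notification path pops it, here after 1 second. A minimal analogue with threading (nova itself uses eventlet-based events keyed the same way; threading.Event is shown purely for clarity):

    # Minimal analogue of the network-vif-plugged wait/signal handshake.
    import threading

    name = 'network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc'
    events = {name: threading.Event()}           # registered before spawn waits

    def external_instance_event(event_name):     # neutron -> nova notification
        events[event_name].set()

    external_instance_event(name)
    print(events[name].wait(timeout=300))        # True: wait completed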
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.099 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949383.0995855, e365caeb-efd7-437b-aa10-e579f7c99f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.100 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Resumed (Lifecycle Event)
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.103 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.110 188707 INFO nova.virt.libvirt.driver [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance spawned successfully.
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.110 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.137 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.146 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.155 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.156 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.157 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.158 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.158 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.159 188707 DEBUG nova.virt.libvirt.driver [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.215 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.262 188707 INFO nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Took 9.46 seconds to spawn the instance on the hypervisor.
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.262 188707 DEBUG nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.329 188707 INFO nova.compute.manager [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Took 10.74 seconds to build instance.
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.350 188707 DEBUG oslo_concurrency.lockutils [None req-9b5171aa-27c6-4474-a3f6-f730062cc941 e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
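The durations reported above (9.46 s hypervisor spawn, 10.74 s total build, lock held 11.077 s) can be cross-checked from the log timestamps themselves. A sketch; the start timestamp shown is an assumption derived by subtracting the reported 10.74 s from the "Took 10.74 seconds" line, not a value present in this excerpt:

    # Sketch: recompute a reported build time from two log timestamps.
    from datetime import datetime

    fmt = '%Y-%m-%d %H:%M:%S.%f'
    start = datetime.strptime('2026-02-24 16:09:32.589', fmt)  # assumed start
    end = datetime.strptime('2026-02-24 16:09:43.329', fmt)    # "Took 10.74 seconds" line
    print((end - start).total_seconds())  # 10.74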
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.519 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.882 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Successfully updated port: 89307b57-fe85-45b9-b123-781c385e8fec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.908 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.909 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquired lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.909 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:09:43 compute-0 nova_compute[188703]: 2026-02-24 16:09:43.922 188707 DEBUG nova.network.neutron [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:09:44 compute-0 nova_compute[188703]: 2026-02-24 16:09:44.020 188707 DEBUG nova.compute.manager [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-changed-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:44 compute-0 nova_compute[188703]: 2026-02-24 16:09:44.021 188707 DEBUG nova.compute.manager [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing instance network info cache due to event network-changed-89307b57-fe85-45b9-b123-781c385e8fec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:44 compute-0 nova_compute[188703]: 2026-02-24 16:09:44.021 188707 DEBUG oslo_concurrency.lockutils [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:44 compute-0 nova_compute[188703]: 2026-02-24 16:09:44.194 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:09:44 compute-0 nova_compute[188703]: 2026-02-24 16:09:44.934 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.489 188707 DEBUG nova.compute.manager [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.491 188707 DEBUG oslo_concurrency.lockutils [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.491 188707 DEBUG oslo_concurrency.lockutils [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.492 188707 DEBUG oslo_concurrency.lockutils [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.493 188707 DEBUG nova.compute.manager [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] No waiting events found dispatching network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.494 188707 WARNING nova.compute.manager [req-741a84fa-45d5-4afb-92d0-370fbc0393fc req-0200ddb4-4d8f-492f-8da0-e2ad01fa6d54 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received unexpected event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc for instance with vm_state active and task_state None.
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.848 188707 DEBUG nova.network.neutron [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.877 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Releasing lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.877 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Instance network_info: |[{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.879 188707 DEBUG oslo_concurrency.lockutils [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.879 188707 DEBUG nova.network.neutron [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.885 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Start _get_guest_xml network_info=[{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.896 188707 WARNING nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.912 188707 DEBUG nova.virt.libvirt.host [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.913 188707 DEBUG nova.virt.libvirt.host [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.920 188707 DEBUG nova.virt.libvirt.host [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.921 188707 DEBUG nova.virt.libvirt.host [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
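The probe above first looks for a cgroups-v1 cpu controller (missing on this host), then falls back to v2 and finds one. On a v2-only host the equivalent check is a one-file read on the unified hierarchy (sketch; the mount point is the standard cgroups-v2 location):

    # Sketch: the cgroups-v2 side of the logged CPU-controller probe.
    # The unified hierarchy lists available controllers in one file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)  # True on this host, per the log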
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.922 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.923 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.924 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.925 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.925 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.926 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.927 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.928 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.928 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.929 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.930 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.931 188707 DEBUG nova.virt.hardware [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
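The topology walk above (preferred 0:0:0, limits of 65536 per dimension, 1 vCPU, one resulting 1:1:1 candidate) follows a divisor-enumeration approach: every (sockets, cores, threads) split whose product equals the vCPU count and respects the limits. A simplified sketch of that enumeration; nova's actual logic lives in nova.virt.hardware._get_possible_cpu_topologies and differs in detail:

    # Simplified sketch of possible-topology enumeration for N vCPUs
    # under per-dimension limits (not nova's exact code).
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]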
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.937 188707 DEBUG nova.virt.libvirt.vif [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860691187',display_name='tempest-TestNetworkBasicOps-server-1860691187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860691187',id=10,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWQxWtIV9FMssvPhsJSS0b43cQ+JeTN5OmMh7ANpmE26YYNPcHmkssbLiZupNMfTv7+TFqDL55tdsAqB5HmEAQshKtXfoH8ypUBR8AFOF1LF0BF4BWy/RQntVsycbSHuQ==',key_name='tempest-TestNetworkBasicOps-1833619502',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-3kr6aclc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:38Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=fc3a62d6-b05f-4032-a883-8c231d29ff29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.939 188707 DEBUG nova.network.os_vif_util [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.940 188707 DEBUG nova.network.os_vif_util [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.941 188707 DEBUG nova.objects.instance [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc3a62d6-b05f-4032-a883-8c231d29ff29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.959 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <uuid>fc3a62d6-b05f-4032-a883-8c231d29ff29</uuid>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <name>instance-0000000a</name>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:name>tempest-TestNetworkBasicOps-server-1860691187</nova:name>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:09:45</nova:creationTime>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:user uuid="7cec00195bca4d15bbb0449e21faedcf">tempest-TestNetworkBasicOps-2112956786-project-member</nova:user>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:project uuid="6d42735c7eb84888b6c3dca096466e04">tempest-TestNetworkBasicOps-2112956786</nova:project>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         <nova:port uuid="89307b57-fe85-45b9-b123-781c385e8fec">
Feb 24 16:09:45 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <system>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="serial">fc3a62d6-b05f-4032-a883-8c231d29ff29</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="uuid">fc3a62d6-b05f-4032-a883-8c231d29ff29</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </system>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <os>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </os>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <features>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </features>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.config"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:0c:cd:7f"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <target dev="tap89307b57-fe"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/console.log" append="off"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <video>
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </video>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:09:45 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:09:45 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:09:45 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:09:45 compute-0 nova_compute[188703]: </domain>
Feb 24 16:09:45 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
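[annotation] The XML dumped above is the complete guest definition nova hands to libvirt; note that libvirt's <memory> element defaults to KiB, so 131072 KiB is the flavor's 128 MiB. When auditing such dumps it helps to reduce them to the fields that must agree with the flavor and the plugged VIF. A stdlib sketch (not nova code) under that reading:

```python
# domain_summary is a hypothetical audit helper: feed it the text between
# <domain ...> and </domain> above and compare against the flavor/VIF.
import xml.etree.ElementTree as ET

def domain_summary(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    return {
        "name": root.findtext("name"),
        "memory_mib": int(root.findtext("memory")) // 1024,  # <memory> is KiB
        "vcpus": int(root.findtext("vcpu")),
        "machine": root.find("./os/type").get("machine"),    # "q35" here
        "disks": [d.find("source").get("file")
                  for d in root.findall("./devices/disk")
                  if d.find("source") is not None],
        "taps": [i.find("target").get("dev")
                 for i in root.findall("./devices/interface")
                 if i.find("target") is not None],
    }
```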
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.960 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Preparing to wait for external event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.960 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.961 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.961 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
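[annotation] prepare_for_instance_event registers the network-vif-plugged waiter before the port is plugged, so the Neutron callback cannot slip into the gap between plugging and waiting. A toy analogue of that ordering; nova's real implementation is eventlet-based, and all names here are hypothetical:

```python
# Toy analogue of prepare-then-wait: register the waiter BEFORE triggering
# the action that produces the event, so a fast callback cannot be lost.
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(self, uuid, name):
        with self._lock:  # mirrors the "<uuid>-events" lock in the log
            return self._events.setdefault((uuid, name), threading.Event())

    def deliver(self, uuid, name):
        with self._lock:
            ev = self._events.get((uuid, name))
        if ev:
            ev.set()

events = InstanceEvents()
waiter = events.prepare("fc3a62d6", "network-vif-plugged")
# ... plug the VIF, start the guest ...
events.deliver("fc3a62d6", "network-vif-plugged")  # the neutron callback
assert waiter.wait(timeout=300)
```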
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.962 188707 DEBUG nova.virt.libvirt.vif [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860691187',display_name='tempest-TestNetworkBasicOps-server-1860691187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860691187',id=10,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWQxWtIV9FMssvPhsJSS0b43cQ+JeTN5OmMh7ANpmE26YYNPcHmkssbLiZupNMfTv7+TFqDL55tdsAqB5HmEAQshKtXfoH8ypUBR8AFOF1LF0BF4BWy/RQntVsycbSHuQ==',key_name='tempest-TestNetworkBasicOps-1833619502',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-3kr6aclc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:38Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=fc3a62d6-b05f-4032-a883-8c231d29ff29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.962 188707 DEBUG nova.network.os_vif_util [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.963 188707 DEBUG nova.network.os_vif_util [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.964 188707 DEBUG os_vif [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.964 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.965 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.965 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.969 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.969 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89307b57-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.970 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89307b57-fe, col_values=(('external_ids', {'iface-id': '89307b57-fe85-45b9-b123-781c385e8fec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:cd:7f', 'vm-uuid': 'fc3a62d6-b05f-4032-a883-8c231d29ff29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.972 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.974 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:09:45 compute-0 NetworkManager[56995]: <info>  [1771949385.9748] manager: (tap89307b57-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.980 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:45 compute-0 nova_compute[188703]: 2026-02-24 16:09:45.981 188707 INFO os_vif [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe')
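[annotation] The transaction at 16:09:45.969-970 (AddPortCommand plus DbSetCommand on external_ids) is the entire plug: once external_ids:iface-id on the OVS interface equals the Neutron port UUID, ovn-controller can claim the binding, which is exactly what happens at 16:09:46 below. For reproducing by hand, the CLI equivalent would be roughly the following sketch; os-vif does not shell out like this, it speaks OVSDB directly through ovsdbapp:

```python
# Approximate ovs-vsctl equivalent of the logged AddPortCommand +
# DbSetCommand pair (sketch for manual reproduction, not os-vif code).
# ovn-controller matches external_ids:iface-id against the logical port
# name to claim the binding.
import subprocess

port = "tap89307b57-fe"
iface_id = "89307b57-fe85-45b9-b123-781c385e8fec"
mac = "fa:16:3e:0c:cd:7f"
vm_uuid = "fc3a62d6-b05f-4032-a883-8c231d29ff29"

subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
     "--", "set", "Interface", port,
     f"external_ids:iface-id={iface_id}",
     "external_ids:iface-status=active",
     f"external_ids:attached-mac={mac}",
     f"external_ids:vm-uuid={vm_uuid}"],
    check=True,
)
```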
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.037 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.037 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.038 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No VIF found with MAC fa:16:3e:0c:cd:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.039 188707 INFO nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Using config drive
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.146 188707 DEBUG nova.compute.manager [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.147 188707 DEBUG nova.compute.manager [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing instance network info cache due to event network-changed-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.148 188707 DEBUG oslo_concurrency.lockutils [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.262 188707 DEBUG nova.network.neutron [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.285 188707 DEBUG oslo_concurrency.lockutils [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.286 188707 DEBUG nova.compute.manager [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.286 188707 DEBUG nova.compute.manager [None req-897d8663-f26b-451d-8e20-605ca70f3edb 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] network_info to inject: |[{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.288 188707 DEBUG oslo_concurrency.lockutils [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.288 188707 DEBUG nova.network.neutron [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Refreshing network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.393 188707 INFO nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Creating config drive at /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.config
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.400 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8i9qhucq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.524 188707 DEBUG oslo_concurrency.processutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8i9qhucq" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
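[annotation] The config drive is just an ISO 9660 volume labeled config-2, built with mkisofs from a temporary staging directory (/tmp/tmp8i9qhucq here) and attached as the SATA cdrom declared in the domain XML above. A sketch mirroring the logged command; the staging layout (openstack/latest/meta_data.json and friends) is assumed from the config-drive convention, not shown in this log:

```python
# Sketch of the logged mkisofs invocation: pack a staging directory into
# a config-2 ISO. Flags copied from the command logged above.
import subprocess

def build_config_drive(staging_dir: str, out_path: str) -> None:
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", out_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
        check=True,
    )

# e.g. build_config_drive("/tmp/tmp8i9qhucq",
#     "/var/lib/nova/instances/<uuid>/disk.config")
```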
Feb 24 16:09:46 compute-0 kernel: tap89307b57-fe: entered promiscuous mode
Feb 24 16:09:46 compute-0 ovn_controller[98701]: 2026-02-24T16:09:46Z|00103|binding|INFO|Claiming lport 89307b57-fe85-45b9-b123-781c385e8fec for this chassis.
Feb 24 16:09:46 compute-0 ovn_controller[98701]: 2026-02-24T16:09:46Z|00104|binding|INFO|89307b57-fe85-45b9-b123-781c385e8fec: Claiming fa:16:3e:0c:cd:7f 10.100.0.13
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.584 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.595 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.5979] manager: (tap89307b57-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Feb 24 16:09:46 compute-0 ovn_controller[98701]: 2026-02-24T16:09:46Z|00105|binding|INFO|Setting lport 89307b57-fe85-45b9-b123-781c385e8fec ovn-installed in OVS
Feb 24 16:09:46 compute-0 ovn_controller[98701]: 2026-02-24T16:09:46Z|00106|binding|INFO|Setting lport 89307b57-fe85-45b9-b123-781c385e8fec up in Southbound
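[annotation] The sequence above (Claiming lport, then ovn-installed in OVS, then up in Southbound) is ovn-controller reacting to the iface-id written a moment earlier; the Southbound "up" flag is what ultimately lets Neutron emit the network-vif-plugged event nova is still waiting on. One way to verify such a claim from the chassis, as a sketch that assumes ovn-sbctl on the node can reach the southbound DB:

```python
# Hypothetical check of a Port_Binding claim like the one logged above.
import subprocess

lport = "89307b57-fe85-45b9-b123-781c385e8fec"
out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={lport}"],
    check=True, capture_output=True, text=True,
).stdout
print(out)  # expect a chassis uuid and up : [true] once the claim lands
```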
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.595 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:cd:7f 10.100.0.13'], port_security=['fa:16:3e:0c:cd:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fc3a62d6-b05f-4032-a883-8c231d29ff29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d42735c7eb84888b6c3dca096466e04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0df7c72f-5c7a-4af5-b1f1-1b6470a83b83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2db2ff8a-782e-4e32-b2de-a44ea0ff97e9, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=89307b57-fe85-45b9-b123-781c385e8fec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.597 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 89307b57-fe85-45b9-b123-781c385e8fec in datapath aeadce2d-53c4-4727-bbc6-e1191df0ffea bound to our chassis
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.599 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aeadce2d-53c4-4727-bbc6-e1191df0ffea
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.606 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.610 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[04de5055-9399-42eb-bf97-86aee22b262f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.611 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaeadce2d-51 in ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:09:46 compute-0 systemd-machined[158049]: New machine qemu-10-instance-0000000a.
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.613 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaeadce2d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.613 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa4fa4a-442d-4325-89fe-13976a524a6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.613 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[22fac6a1-18e3-41d5-980c-9d92e9197b22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.622 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7e48b8-aa0d-4c7e-8c7f-93aed03dd04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.644 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[da2ba803-1e65-4b13-b2d5-6654ad9bb93e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 systemd-udevd[253292]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.6945] device (tap89307b57-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.6956] device (tap89307b57-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.693 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[6687638a-9531-4d6b-b4f6-574d729257c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.7154] manager: (tapaeadce2d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Feb 24 16:09:46 compute-0 systemd-udevd[253299]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.714 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1390a27f-642d-4a3f-9646-18ba26085e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.750 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc91977-1af1-45d0-a195-e27fe2245e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.756 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb80e3b-5d33-4fc5-b50d-71a77d4e4d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.7797] device (tapaeadce2d-50): carrier: link connected
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.784 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff7df7-6ff8-4c3b-a43c-46793ddf9773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.803 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e040e5e4-a107-448d-963a-54486010e01f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaeadce2d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:98:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253320, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.816 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[64c95c7b-b4e7-44cc-ac83-9a6468a94cd8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:9870'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509928, 'tstamp': 509928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253321, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.834 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[08e67338-6c2d-4b22-add4-53d10d76fe63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaeadce2d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:98:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253322, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.858 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7864ff-9851-4eb0-b015-b53d2a28a849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.905 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb29a14-a56d-4870-a5d5-710a5fe02f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.907 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeadce2d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.907 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.907 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaeadce2d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:46 compute-0 kernel: tapaeadce2d-50: entered promiscuous mode
Feb 24 16:09:46 compute-0 NetworkManager[56995]: <info>  [1771949386.9099] manager: (tapaeadce2d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.917 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaeadce2d-50, col_values=(('external_ids', {'iface-id': 'e6d03cb3-ba09-4724-83d3-edb05289054b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:46 compute-0 ovn_controller[98701]: 2026-02-24T16:09:46Z|00107|binding|INFO|Releasing lport e6d03cb3-ba09-4724-83d3-edb05289054b from this chassis (sb_readonly=0)
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.919 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aeadce2d-53c4-4727-bbc6-e1191df0ffea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aeadce2d-53c4-4727-bbc6-e1191df0ffea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.920 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[1597a118-92f8-4497-9acd-8d25978930f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.923 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-aeadce2d-53c4-4727-bbc6-e1191df0ffea
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/aeadce2d-53c4-4727-bbc6-e1191df0ffea.pid.haproxy
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID aeadce2d-53c4-4727-bbc6-e1191df0ffea
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:09:46 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:46.923 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'env', 'PROCESS_TAG=haproxy-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aeadce2d-53c4-4727-bbc6-e1191df0ffea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
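[annotation] The generated haproxy config above shows the OVN metadata pattern: a per-network namespace (ovnmeta-<network-uuid>), a listener on 169.254.169.254:80 inside it, every request tagged with X-OVN-Network-ID and handed to the /var/lib/neutron/metadata_proxy UNIX socket (an haproxy server address given as a filesystem path is a UNIX socket). A quick liveness probe against the proxy just provisioned, as a hypothetical troubleshooting sketch (run as root on the node); since the metadata service identifies instances by the guest's source IP, a probe from the namespace itself only proves the listener and socket are wired up:

```python
# Hypothetical probe, not part of the agent: hit the proxy inside the
# ovnmeta namespace and report the HTTP status code it returns.
import subprocess

netns = "ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea"
result = subprocess.run(
    ["ip", "netns", "exec", netns, "curl", "-s", "-o", "/dev/null",
     "-w", "%{http_code}", "http://169.254.169.254/"],
    check=True, capture_output=True, text=True,
)
print(result.stdout)  # any HTTP status at all proves the listener answered
```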
Feb 24 16:09:46 compute-0 nova_compute[188703]: 2026-02-24 16:09:46.920 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.077 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.078 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.079 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.080 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.081 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.085 188707 INFO nova.compute.manager [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Terminating instance
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.088 188707 DEBUG nova.compute.manager [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
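The terminate path above serializes on a lock named after the instance UUID before touching the hypervisor, so a concurrent operation on the same instance cannot interleave. The same oslo.concurrency pattern, as a sketch (lock name taken from the log; the function body is illustrative):

```python
from oslo_concurrency import lockutils

# Sketch of the per-instance serialization seen above: destructive
# operations on the instance funnel through a lock named by its UUID.
@lockutils.synchronized("4ed039f2-92fd-4c07-9a3c-df2da1172e12")
def do_terminate_instance():
    print("holding the per-instance lock; safe to tear the guest down")

do_terminate_instance()
```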
Feb 24 16:09:47 compute-0 kernel: tapc26aa9e8-b1 (unregistering): left promiscuous mode
Feb 24 16:09:47 compute-0 NetworkManager[56995]: <info>  [1771949387.1234] device (tapc26aa9e8-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00108|binding|INFO|Releasing lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 from this chassis (sb_readonly=0)
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00109|binding|INFO|Setting lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 down in Southbound
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.130 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00110|binding|INFO|Removing iface tapc26aa9e8-b1 ovn-installed in OVS
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.140 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.149 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.148 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:99:52 10.100.0.7'], port_security=['fa:16:3e:1c:99:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ed039f2-92fd-4c07-9a3c-df2da1172e12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea8bd642-3dcc-421c-b6d8-009d58526417', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff039f17be824e0da1015761ba1fc96a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9c7d6a6a-c6cc-4a92-80e8-048801b99214', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300455a1-55a1-4123-af9e-3289e53c8820, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
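The "Matched UPDATE" record shows the metadata agent's ovsdbapp row event firing on a Port_Binding change, with old= carrying only the columns that changed (here up and chassis). A sketch of such an event class, assuming ovsdbapp's RowEvent interface (the handler body is illustrative, not Neutron's actual handler):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    # Mirrors the matched event above: fire on updates to Port_Binding,
    # with no extra match conditions.
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def run(self, event, row, old):
        # 'old' carries only the changed columns, e.g. old.chassis above.
        print(f"{row.logical_port}: {event}")
```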
Feb 24 16:09:47 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 24 16:09:47 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Consumed 40.692s CPU time.
Feb 24 16:09:47 compute-0 systemd-machined[158049]: Machine qemu-6-instance-00000007 terminated.
Feb 24 16:09:47 compute-0 podman[253333]: 2026-02-24 16:09:47.219625998 +0000 UTC m=+0.064624801 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:09:47 compute-0 podman[253336]: 2026-02-24 16:09:47.242696116 +0000 UTC m=+0.088000428 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 16:09:47 compute-0 kernel: tapc26aa9e8-b1: entered promiscuous mode
Feb 24 16:09:47 compute-0 NetworkManager[56995]: <info>  [1771949387.3105] manager: (tapc26aa9e8-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Feb 24 16:09:47 compute-0 systemd-udevd[253317]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.312 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00111|binding|INFO|Claiming lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 for this chassis.
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00112|binding|INFO|c26aa9e8-b157-4dd8-8c4c-2767f7a725f4: Claiming fa:16:3e:1c:99:52 10.100.0.7
Feb 24 16:09:47 compute-0 kernel: tapc26aa9e8-b1 (unregistering): left promiscuous mode
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.326 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:99:52 10.100.0.7'], port_security=['fa:16:3e:1c:99:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ed039f2-92fd-4c07-9a3c-df2da1172e12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea8bd642-3dcc-421c-b6d8-009d58526417', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff039f17be824e0da1015761ba1fc96a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9c7d6a6a-c6cc-4a92-80e8-048801b99214', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300455a1-55a1-4123-af9e-3289e53c8820, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.335 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 ovn_controller[98701]: 2026-02-24T16:09:47Z|00113|binding|INFO|Releasing lport c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 from this chassis (sb_readonly=0)
Feb 24 16:09:47 compute-0 podman[253395]: 2026-02-24 16:09:47.338437689 +0000 UTC m=+0.072082428 container create c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.346 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:99:52 10.100.0.7'], port_security=['fa:16:3e:1c:99:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ed039f2-92fd-4c07-9a3c-df2da1172e12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea8bd642-3dcc-421c-b6d8-009d58526417', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff039f17be824e0da1015761ba1fc96a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9c7d6a6a-c6cc-4a92-80e8-048801b99214', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300455a1-55a1-4123-af9e-3289e53c8820, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.358 188707 INFO nova.virt.libvirt.driver [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Instance destroyed successfully.
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.359 188707 DEBUG nova.objects.instance [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lazy-loading 'resources' on Instance uuid 4ed039f2-92fd-4c07-9a3c-df2da1172e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.376 188707 DEBUG nova.virt.libvirt.vif [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1086727361',display_name='tempest-AttachInterfacesUnderV243Test-server-1086727361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1086727361',id=7,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPRzxrgD/mVIpbyPawYD3WAG4tYU0QjDhPzO0JVhM25FdcYSiej4ytWWlcxVnG8odOA7sUe2Mbk3XrtHW6cAicJSYxZJb3LRfc4Sq6paFTTk27LRVk6RCnMPCsWqOW0iw==',key_name='tempest-keypair-34402587',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:08:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff039f17be824e0da1015761ba1fc96a',ramdisk_id='',reservation_id='r-hc07cw81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1803128698',owner_user_name='tempest-AttachInterfacesUnderV243Test-1803128698-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:09:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40089d2ccf484a7c9ecdf03cf6fe53bb',uuid=4ed039f2-92fd-4c07-9a3c-df2da1172e12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.376 188707 DEBUG nova.network.os_vif_util [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converting VIF {"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.377 188707 DEBUG nova.network.os_vif_util [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.378 188707 DEBUG os_vif [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.380 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.381 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26aa9e8-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.388 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.392 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:09:47 compute-0 podman[253395]: 2026-02-24 16:09:47.296402345 +0000 UTC m=+0.030047104 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.394 188707 INFO os_vif [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:99:52,bridge_name='br-int',has_traffic_filtering=True,id=c26aa9e8-b157-4dd8-8c4c-2767f7a725f4,network=Network(ea8bd642-3dcc-421c-b6d8-009d58526417),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc26aa9e8-b1')
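The DelPortCommand above is ovsdbapp's transactional removal of the tap port from br-int during unplug. A standalone sketch of the same call, assuming the default local OVSDB socket path:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local Open vSwitch database (default unix socket assumed).
idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Same operation as the logged DelPortCommand; idempotent via if_exists.
ovs.del_port("tapc26aa9e8-b1", bridge="br-int", if_exists=True).execute(
    check_error=True)
```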
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.395 188707 INFO nova.virt.libvirt.driver [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Deleting instance files /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12_del
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.395 188707 INFO nova.virt.libvirt.driver [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Deletion of /var/lib/nova/instances/4ed039f2-92fd-4c07-9a3c-df2da1172e12_del complete
Feb 24 16:09:47 compute-0 systemd[1]: Started libpod-conmon-c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97.scope.
Feb 24 16:09:47 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940981fb5d2b8a413d7fff2f4e0ebc8b1e75429d070e17f9456a6798c7aab8ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.521 188707 INFO nova.compute.manager [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Took 0.43 seconds to destroy the instance on the hypervisor.
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.522 188707 DEBUG oslo.service.loopingcall [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.523 188707 DEBUG nova.compute.manager [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.523 188707 DEBUG nova.network.neutron [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:09:47 compute-0 podman[253395]: 2026-02-24 16:09:47.563863394 +0000 UTC m=+0.297508123 container init c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 16:09:47 compute-0 podman[253395]: 2026-02-24 16:09:47.578236921 +0000 UTC m=+0.311881650 container start c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 24 16:09:47 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [NOTICE]   (253424) : New worker (253426) forked
Feb 24 16:09:47 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [NOTICE]   (253424) : Loading success.
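With the new proxy loaded, a guest on this network reaches metadata at the link-local address from the listener config dumped earlier; HAProxy relays the request to the unix-socket backend and injects the X-OVN-Network-ID header. A sketch of such a request using the standard EC2-style path (run from inside a guest, not on the host):

```python
import urllib.request

# From a guest on this network: HAProxy on 169.254.169.254:80 forwards to
# the unix-socket backend and adds X-OVN-Network-ID, per the config dump.
with urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id",
        timeout=10) as resp:
    print(resp.read().decode())
```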
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.734 108026 INFO neutron.agent.ovn.metadata.agent [-] Port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 in datapath ea8bd642-3dcc-421c-b6d8-009d58526417 unbound from our chassis
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.735 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea8bd642-3dcc-421c-b6d8-009d58526417, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.737 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b81978-e747-42a4-8d27-9cec67a84323]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:47 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:47.737 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417 namespace which is not needed anymore
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.812 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949387.8113055, fc3a62d6-b05f-4032-a883-8c231d29ff29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.812 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] VM Started (Lifecycle Event)
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.826 188707 DEBUG nova.network.neutron [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updated VIF entry in instance network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.827 188707 DEBUG nova.network.neutron [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.899 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.905 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949387.811424, fc3a62d6-b05f-4032-a883-8c231d29ff29 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.905 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] VM Paused (Lifecycle Event)
Feb 24 16:09:47 compute-0 nova_compute[188703]: 2026-02-24 16:09:47.916 188707 DEBUG oslo_concurrency.lockutils [req-e29c82e8-a7b6-4e98-bf4d-dfaecf75c969 req-99b6f280-9c85-4bbe-9bec-51d02f632e02 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [NOTICE]   (252324) : haproxy version is 2.8.14-c23fe91
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [NOTICE]   (252324) : path to executable is /usr/sbin/haproxy
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [WARNING]  (252324) : Exiting Master process...
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [WARNING]  (252324) : Exiting Master process...
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [ALERT]    (252324) : Current worker (252326) exited with code 143 (Terminated)
Feb 24 16:09:48 compute-0 neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417[252320]: [WARNING]  (252324) : All workers exited. Exiting... (0)
Feb 24 16:09:48 compute-0 systemd[1]: libpod-835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0.scope: Deactivated successfully.
Feb 24 16:09:48 compute-0 podman[253457]: 2026-02-24 16:09:48.015838203 +0000 UTC m=+0.156952969 container died 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.030 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.036 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.074 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] During sync_power_state the instance has a pending task (spawning). Skip.
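The numeric values in the sync_power_state records map to nova.compute.power_state constants; a small reference table as code:

```python
# Constants from nova.compute.power_state matching the numeric values in
# the sync_power_state records above.
POWER_STATES = {
    0: "NOSTATE",   # DB power_state before the first successful sync
    1: "RUNNING",   # the "VM power_state: 1" once the guest resumes
    3: "PAUSED",    # the "VM power_state: 3" while libvirt pauses the guest
    4: "SHUTDOWN",
}

print(POWER_STATES[3])
```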
Feb 24 16:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0-userdata-shm.mount: Deactivated successfully.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.303 188707 DEBUG nova.compute.manager [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.304 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.304 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.305 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.306 188707 DEBUG nova.compute.manager [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Processing event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.306 188707 DEBUG nova.compute.manager [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-changed-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.307 188707 DEBUG nova.compute.manager [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Refreshing instance network info cache due to event network-changed-1c040558-99c8-40bd-8b21-1337faca7edc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.307 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-771cf174f295ee04ade9366d9b9c8fe0dd382e41789f97d68202d79701c8d3d4-merged.mount: Deactivated successfully.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.308 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.309 188707 DEBUG nova.network.neutron [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Refreshing network info cache for port 1c040558-99c8-40bd-8b21-1337faca7edc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.311 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.315 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949388.3150923, fc3a62d6-b05f-4032-a883-8c231d29ff29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.316 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] VM Resumed (Lifecycle Event)
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.317 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.321 188707 INFO nova.virt.libvirt.driver [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Instance spawned successfully.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.322 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.355 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.360 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.425 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:09:48 compute-0 podman[253457]: 2026-02-24 16:09:48.437960936 +0000 UTC m=+0.579075672 container cleanup 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 16:09:48 compute-0 systemd[1]: libpod-conmon-835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0.scope: Deactivated successfully.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.522 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.547 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.548 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.549 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.550 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.557 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.562 188707 DEBUG nova.virt.libvirt.driver [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:09:48 compute-0 podman[253483]: 2026-02-24 16:09:48.701315251 +0000 UTC m=+0.232940213 container remove 835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.708 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[56c079d5-6f7b-4669-87a4-72644ffcd276]: (4, ('Tue Feb 24 04:09:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417 (835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0)\n835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0\nTue Feb 24 04:09:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417 (835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0)\n835500baef722db0b5eeef9757fb2052cf00f8c15c7cba7cb73927efbe405ae0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.710 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0aae2dd8-56cf-4e69-95ea-9abb9770e26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.711 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea8bd642-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:48 compute-0 kernel: tapea8bd642-30: left promiscuous mode
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.715 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.719 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[024c7098-c4cd-4644-965e-f6fc04dcf5d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.724 188707 INFO nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Took 9.99 seconds to spawn the instance on the hypervisor.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.726 188707 DEBUG nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.726 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.741 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[09148c4f-ab49-4736-bc4e-d54483a3f6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.743 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e02e6e9f-1263-42b7-8d28-d546ef6ad393]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.768 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c7c77-7546-4117-8577-fbd10a2b9392]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501341, 'reachable_time': 39806, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253498, 'error': None, 'target': 'ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.772 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea8bd642-3dcc-421c-b6d8-009d58526417 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.772 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[69d42d65-b272-49d3-9692-dc0709e47641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dea8bd642\x2d3dcc\x2d421c\x2db6d8\x2d009d58526417.mount: Deactivated successfully.
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.774 108026 INFO neutron.agent.ovn.metadata.agent [-] Port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 in datapath ea8bd642-3dcc-421c-b6d8-009d58526417 unbound from our chassis
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.776 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea8bd642-3dcc-421c-b6d8-009d58526417, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.777 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[38db0d66-34be-485f-95c8-903299267323]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.778 108026 INFO neutron.agent.ovn.metadata.agent [-] Port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 in datapath ea8bd642-3dcc-421c-b6d8-009d58526417 unbound from our chassis
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.781 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea8bd642-3dcc-421c-b6d8-009d58526417, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:09:48 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:48.781 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[7c61f7ae-11c2-4c23-bcba-732b3d0214dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.873 188707 INFO nova.compute.manager [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Took 10.55 seconds to build instance.
Feb 24 16:09:48 compute-0 nova_compute[188703]: 2026-02-24 16:09:48.957 188707 DEBUG oslo_concurrency.lockutils [None req-abd1ae4d-e2b3-4722-bfe5-d75e5d13515d 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:49 compute-0 nova_compute[188703]: 2026-02-24 16:09:49.001 188707 DEBUG nova.network.neutron [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updated VIF entry in instance network info cache for port c26aa9e8-b157-4dd8-8c4c-2767f7a725f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:49 compute-0 nova_compute[188703]: 2026-02-24 16:09:49.002 188707 DEBUG nova.network.neutron [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [{"id": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "address": "fa:16:3e:1c:99:52", "network": {"id": "ea8bd642-3dcc-421c-b6d8-009d58526417", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1357532606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff039f17be824e0da1015761ba1fc96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc26aa9e8-b1", "ovs_interfaceid": "c26aa9e8-b157-4dd8-8c4c-2767f7a725f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:49 compute-0 nova_compute[188703]: 2026-02-24 16:09:49.400 188707 DEBUG oslo_concurrency.lockutils [req-79ebdc0f-eceb-45ad-8726-91c2eee5a89f req-f8622718-eb5e-4804-bfaf-e795677ad65a 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4ed039f2-92fd-4c07-9a3c-df2da1172e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.833 188707 DEBUG nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.833 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.834 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.834 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.835 188707 DEBUG nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] No waiting events found dispatching network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.835 188707 WARNING nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received unexpected event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec for instance with vm_state active and task_state None.
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.836 188707 DEBUG nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-unplugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.836 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.836 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.837 188707 DEBUG oslo_concurrency.lockutils [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.837 188707 DEBUG nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] No waiting events found dispatching network-vif-unplugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:09:51 compute-0 nova_compute[188703]: 2026-02-24 16:09:51.838 188707 DEBUG nova.compute.manager [req-8566fc52-57a8-4e45-806b-29cbefe49b54 req-eca5ccea-eb8a-4cb5-8b15-eca1b647af4f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-unplugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:09:52 compute-0 podman[253500]: 2026-02-24 16:09:52.159608698 +0000 UTC m=+0.108028823 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., release=1214.1726694543)
Feb 24 16:09:52 compute-0 podman[253501]: 2026-02-24 16:09:52.161636374 +0000 UTC m=+0.105302568 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ceilometer_agent_ipmi, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.349 188707 DEBUG nova.network.neutron [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.383 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.411 188707 DEBUG nova.network.neutron [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updated VIF entry in instance network info cache for port 1c040558-99c8-40bd-8b21-1337faca7edc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.412 188707 DEBUG nova.network.neutron [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.523 188707 INFO nova.compute.manager [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Took 5.00 seconds to deallocate network for instance.
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.627 188707 DEBUG oslo_concurrency.lockutils [req-d0cbd888-22a8-452a-a4b2-57089b159674 req-b0239f2a-143e-4666-b8f6-6ef08511d9eb 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.661 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.662 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.796 188707 DEBUG nova.compute.provider_tree [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.850 188707 DEBUG nova.scheduler.client.report [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.932 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:52 compute-0 nova_compute[188703]: 2026-02-24 16:09:52.981 188707 INFO nova.scheduler.client.report [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Deleted allocations for instance 4ed039f2-92fd-4c07-9a3c-df2da1172e12
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.189 188707 DEBUG oslo_concurrency.lockutils [None req-ec63c42f-2e06-4588-af60-c35c01d8e6e6 40089d2ccf484a7c9ecdf03cf6fe53bb ff039f17be824e0da1015761ba1fc96a - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.440 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.441 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.524 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.549 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.665 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.666 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.673 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.674 188707 INFO nova.compute.claims [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.814 188707 DEBUG nova.compute.provider_tree [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.850 188707 DEBUG nova.scheduler.client.report [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.873 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.874 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.922 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.923 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.927 188707 DEBUG nova.compute.manager [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.928 188707 DEBUG oslo_concurrency.lockutils [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.928 188707 DEBUG oslo_concurrency.lockutils [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.929 188707 DEBUG oslo_concurrency.lockutils [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4ed039f2-92fd-4c07-9a3c-df2da1172e12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.929 188707 DEBUG nova.compute.manager [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] No waiting events found dispatching network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.930 188707 WARNING nova.compute.manager [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received unexpected event network-vif-plugged-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 for instance with vm_state deleted and task_state None.
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.930 188707 DEBUG nova.compute.manager [req-13965624-9ef6-46a9-be73-878fa0bd8b71 req-66025d71-c9b1-4e21-a9df-dea978673531 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Received event network-vif-deleted-c26aa9e8-b157-4dd8-8c4c-2767f7a725f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.942 188707 INFO nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:09:53 compute-0 nova_compute[188703]: 2026-02-24 16:09:53.960 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.052 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.054 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.054 188707 INFO nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Creating image(s)
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.055 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.055 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.056 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.057 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "bda4b94876a964317b1f9cfba4b35250036d1777" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.057 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.112 188707 DEBUG nova.policy [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95c31253f307489ba7dfda7d2823f04a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:09:54 compute-0 nova_compute[188703]: 2026-02-24 16:09:54.687 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Successfully created port: 06aae3cb-60d1-46ff-8ae9-26775338ef60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.370 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.439 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.part --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.440 188707 DEBUG nova.virt.images [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] c4831085-6e4d-4710-9d1c-263fd9bf6235 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.562 188707 DEBUG nova.privsep.utils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.563 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.part /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:55.737 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:55.738 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:55.739 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.847 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.part /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.converted" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.851 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.921 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777.converted --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.923 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:55 compute-0 nova_compute[188703]: 2026-02-24 16:09:55.951 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.020 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.022 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "bda4b94876a964317b1f9cfba4b35250036d1777" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.024 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.049 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.069 188707 DEBUG nova.compute.manager [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-changed-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.070 188707 DEBUG nova.compute.manager [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing instance network info cache due to event network-changed-89307b57-fe85-45b9-b123-781c385e8fec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.071 188707 DEBUG oslo_concurrency.lockutils [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.072 188707 DEBUG oslo_concurrency.lockutils [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.073 188707 DEBUG nova.network.neutron [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.118 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.120 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777,backing_fmt=raw /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.164 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777,backing_fmt=raw /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.166 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.167 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.218 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.220 188707 DEBUG nova.virt.disk.api [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Checking if we can resize image /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.222 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.301 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.302 188707 DEBUG nova.virt.disk.api [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Cannot resize image /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.303 188707 DEBUG nova.objects.instance [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'migration_context' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.322 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.323 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Ensure instance console log exists: /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.324 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.325 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.326 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.549 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Successfully updated port: 06aae3cb-60d1-46ff-8ae9-26775338ef60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.567 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.568 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.569 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.732 188707 DEBUG nova.compute.manager [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-changed-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.733 188707 DEBUG nova.compute.manager [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Refreshing instance network info cache due to event network-changed-06aae3cb-60d1-46ff-8ae9-26775338ef60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.733 188707 DEBUG oslo_concurrency.lockutils [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:09:56 compute-0 nova_compute[188703]: 2026-02-24 16:09:56.802 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:09:57 compute-0 podman[253568]: 2026-02-24 16:09:57.153468971 +0000 UTC m=+0.112049095 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
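Interleaved with the spawn, podman reports a periodic healthcheck of the openstack_network_exporter container (health_status=healthy, failing streak 0). The configured test can also be fired by hand; a hypothetical spot-check:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # and exits 0 when healthy -- the same check behind the event above.
    rc = subprocess.call(
        ["podman", "healthcheck", "run", "openstack_network_exporter"])
    print("healthy" if rc == 0 else "unhealthy")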
Feb 24 16:09:57 compute-0 nova_compute[188703]: 2026-02-24 16:09:57.386 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.438 188707 DEBUG nova.network.neutron [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updated VIF entry in instance network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.440 188707 DEBUG nova.network.neutron [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
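The instance_info_cache payload is plain JSON, so entries like the one above can be mined straight from the journal. A small stdlib helper (the function itself is hypothetical; the field names are verbatim from the logged structure):

    import json

    def fixed_and_floating(network_info_json):
        pairs = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"]
                              for f in ip.get("floating_ips", [])]
                    pairs.append((ip["address"], floats))
        return pairs

    # For the entry above: [('10.100.0.13', ['192.168.122.236'])]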
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.465 188707 DEBUG oslo_concurrency.lockutils [req-aba6ada5-dc33-44e9-a5ce-cf88f7fae5ab req-0e4f708f-8e66-48f5-bba0-9b0e51d4b4a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.527 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.910 188707 DEBUG nova.network.neutron [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.931 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.931 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Instance network_info: |[{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.932 188707 DEBUG oslo_concurrency.lockutils [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.932 188707 DEBUG nova.network.neutron [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Refreshing network info cache for port 06aae3cb-60d1-46ff-8ae9-26775338ef60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.935 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Start _get_guest_xml network_info=[{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:09:43Z,direct_url=<?>,disk_format='qcow2',id=c4831085-6e4d-4710-9d1c-263fd9bf6235,min_disk=0,min_ram=0,name='tempest-scenario-img--996897372',owner='95c31253f307489ba7dfda7d2823f04a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:09:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.945 188707 WARNING nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.956 188707 DEBUG nova.virt.libvirt.host [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.957 188707 DEBUG nova.virt.libvirt.host [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.963 188707 DEBUG nova.virt.libvirt.host [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.964 188707 DEBUG nova.virt.libvirt.host [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
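The two probes above show nova deciding which cgroup generation backs CPU controls: the v1 controller is absent and the v2 one is found, i.e. this host runs unified cgroups. Roughly the v2 half of that check (a sketch of the idea, not nova's code, which goes through its libvirt host helpers):

    from pathlib import Path

    # On a cgroups-v2 (unified) host the active controllers are listed in
    # one file; "cpu" present there matches "CPU controller found" above.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    if controllers.exists():
        print("cpu" in controllers.read_text().split())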
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.965 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.965 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:09:43Z,direct_url=<?>,disk_format='qcow2',id=c4831085-6e4d-4710-9d1c-263fd9bf6235,min_disk=0,min_ram=0,name='tempest-scenario-img--996897372',owner='95c31253f307489ba7dfda7d2823f04a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:09:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.966 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.967 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.967 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.968 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.969 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.969 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.970 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.970 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.971 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.971 188707 DEBUG nova.virt.hardware [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
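The hardware.py sequence above walks nova's topology search: no flavor or image preference (0:0:0), per-dimension limits of 65536, one vCPU, hence a single candidate 1:1:1. A toy re-derivation of the enumeration step (illustrative, not nova's implementation):

    import itertools

    def possible_topologies(vcpus, limit=65536):
        # Enumerate (sockets, cores, threads) whose product is vcpus,
        # within the per-dimension limit -- for 1 vCPU only 1:1:1 fits.
        r = range(1, min(vcpus, limit) + 1)
        return [(s, c, t) for s, c, t in itertools.product(r, r, r)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)]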
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.975 188707 DEBUG nova.virt.libvirt.vif [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',id=11,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-cidn7nfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.976 188707 DEBUG nova.network.os_vif_util [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.977 188707 DEBUG nova.network.os_vif_util [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.978 188707 DEBUG nova.objects.instance [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'pci_devices' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
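One detail worth decoding from the big instance dump above: user_data travels base64-encoded. Unpacking it shows the tempest CPU-load script this guest will run at boot:

    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0"
                 "IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7"
                 "IGtpbGwgJCEgCg==")
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!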
Feb 24 16:09:58 compute-0 nova_compute[188703]: 2026-02-24 16:09:58.992 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <uuid>85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124</uuid>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <name>instance-0000000b</name>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:name>te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm</nova:name>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:09:58</nova:creationTime>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:user uuid="69d3eddd2a7d49bf9a69e0ccbb00f957">tempest-PrometheusGabbiTest-1117509900-project-member</nova:user>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:project uuid="95c31253f307489ba7dfda7d2823f04a">tempest-PrometheusGabbiTest-1117509900</nova:project>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="c4831085-6e4d-4710-9d1c-263fd9bf6235"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         <nova:port uuid="06aae3cb-60d1-46ff-8ae9-26775338ef60">
Feb 24 16:09:58 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.2.165" ipVersion="4"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <system>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="serial">85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="uuid">85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </system>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <os>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </os>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <features>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </features>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.config"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:ab:a3:60"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <target dev="tap06aae3cb-60"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/console.log" append="off"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <video>
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </video>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:09:58 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:09:58 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:09:58 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:09:58 compute-0 nova_compute[188703]: </domain>
Feb 24 16:09:58 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
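The domain XML between "Start _get_guest_xml" and the marker above is complete, so it can be lifted out of the journal and inspected offline. A short parse of the interesting fields (assumes the <domain> block was saved to domain.xml, a hypothetical path):

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("domain.xml").getroot()
    print(root.findtext("name"))                  # instance-0000000b
    print(int(root.findtext("memory")) // 1024)   # 128 (MiB)
    print(root.find("os/type").get("machine"))    # q35
    flavor = root.find("metadata/nova:instance/nova:flavor", NOVA_NS)
    print(flavor.get("name"))                     # m1.nano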
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.002 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Preparing to wait for external event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.002 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.002 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.002 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.003 188707 DEBUG nova.virt.libvirt.vif [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',id=11,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-cidn7nfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.003 188707 DEBUG nova.network.os_vif_util [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.004 188707 DEBUG nova.network.os_vif_util [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.004 188707 DEBUG os_vif [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.005 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.006 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.007 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.010 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.010 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06aae3cb-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.011 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06aae3cb-60, col_values=(('external_ids', {'iface-id': '06aae3cb-60d1-46ff-8ae9-26775338ef60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:a3:60', 'vm-uuid': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
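The two ovsdbapp commands form one OVSDB transaction: add the tap port to br-int (idempotently, may_exist=True) and stamp the Interface row with the neutron port ID, MAC and instance UUID so ovn-controller can claim it. A hedged CLI equivalent, batched the same way with "--" (values quoted for the ovsdb parser):

    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap06aae3cb-60",
        "--", "set", "Interface", "tap06aae3cb-60",
        'external_ids:iface-id="06aae3cb-60d1-46ff-8ae9-26775338ef60"',
        'external_ids:iface-status="active"',
        'external_ids:attached-mac="fa:16:3e:ab:a3:60"',
        'external_ids:vm-uuid="85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"',
    ])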
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.013 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.0159] manager: (tap06aae3cb-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.017 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.023 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.024 188707 INFO os_vif [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60')
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.090 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.092 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.093 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No VIF found with MAC fa:16:3e:ab:a3:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.094 188707 INFO nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Using config drive
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.468 188707 INFO nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Creating config drive at /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.config
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.476 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphdbpzodl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.596 188707 DEBUG oslo_concurrency.processutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphdbpzodl" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
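oslo.concurrency logs argv joined by spaces, which is why the multi-word -publisher value above looks unquoted; it is a single argument. The same config-drive build, spelled as an explicit argv list:

    import subprocess

    inst = "/var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", inst + "/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher",
        "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmphdbpzodl",  # staging dir nova filled with the metadata
    ])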
Feb 24 16:09:59 compute-0 kernel: tap06aae3cb-60: entered promiscuous mode
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.652 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 ovn_controller[98701]: 2026-02-24T16:09:59Z|00114|binding|INFO|Claiming lport 06aae3cb-60d1-46ff-8ae9-26775338ef60 for this chassis.
Feb 24 16:09:59 compute-0 ovn_controller[98701]: 2026-02-24T16:09:59Z|00115|binding|INFO|06aae3cb-60d1-46ff-8ae9-26775338ef60: Claiming fa:16:3e:ab:a3:60 10.100.2.165
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.6667] manager: (tap06aae3cb-60): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.666 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.673 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:a3:60 10.100.2.165'], port_security=['fa:16:3e:ab:a3:60 10.100.2.165'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.165/16', 'neutron:device_id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9b818-e146-43d5-9aff-1f87311842d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95c31253f307489ba7dfda7d2823f04a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c332945-b8d3-49ba-8675-a4bd059f5256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dea1c3bb-7b9c-4930-b640-f5e21cc78102, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=06aae3cb-60d1-46ff-8ae9-26775338ef60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.675 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 06aae3cb-60d1-46ff-8ae9-26775338ef60 in datapath 7ba9b818-e146-43d5-9aff-1f87311842d0 bound to our chassis
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.678 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9b818-e146-43d5-9aff-1f87311842d0
Feb 24 16:09:59 compute-0 ovn_controller[98701]: 2026-02-24T16:09:59Z|00116|binding|INFO|Setting lport 06aae3cb-60d1-46ff-8ae9-26775338ef60 ovn-installed in OVS
Feb 24 16:09:59 compute-0 ovn_controller[98701]: 2026-02-24T16:09:59Z|00117|binding|INFO|Setting lport 06aae3cb-60d1-46ff-8ae9-26775338ef60 up in Southbound
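ovn-controller has now claimed the logical port for this chassis and flipped it up in the southbound DB, which is what will eventually let neutron send the network-vif-plugged event nova is waiting on. A spot-check of that binding (hypothetical; assumes ovn-sbctl on this node can reach the southbound DB):

    import subprocess

    print(subprocess.check_output([
        "ovn-sbctl", "--columns=logical_port,chassis,up",
        "find", "Port_Binding",
        "logical_port=06aae3cb-60d1-46ff-8ae9-26775338ef60",
    ]).decode())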
Feb 24 16:09:59 compute-0 systemd-machined[158049]: New machine qemu-11-instance-0000000b.
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.686 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.693 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2fae6be0-59d2-4d86-98e8-34afa9357d8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.694 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ba9b818-e1 in ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.695 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ba9b818-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.695 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[88bc0142-e6fc-4206-ba63-11da7627a1b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.696 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0d05bf-b9d0-41e0-818b-7b8013038c35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.713 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[38ff6ddd-37dc-4640-94a0-78ec17e8226f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 systemd-udevd[253612]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.7310] device (tap06aae3cb-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.7345] device (tap06aae3cb-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.733 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b383d8-653d-4383-9fc0-fdb1927656b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 podman[204685]: time="2026-02-24T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:09:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30470 "" "Go-http-client/1.1"
Feb 24 16:09:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4849 "" "Go-http-client/1.1"
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.778 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9cd8be-d626-469c-98ae-ea03a1af673b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.7850] manager: (tap7ba9b818-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.786 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dcbd13-d12a-4cc6-bfcc-71d4589e2771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.819 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0fef70-4790-4020-a053-343573307fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.821 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[81487594-1d18-45fd-b70d-5df87213f044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 NetworkManager[56995]: <info>  [1771949399.8406] device (tap7ba9b818-e0): carrier: link connected
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.851 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[16978935-de89-4f81-8447-f9d8135cac19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.867 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb4ddb3-567b-4fd1-bf6c-be0bb0be5bd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9b818-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:80:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511234, 'reachable_time': 36129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253643, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.882 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[bf348a7a-c7a5-4730-bebc-a27411834b0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:8090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511234, 'tstamp': 511234}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253644, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.900 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[735de6c4-fb65-4a3c-925b-412585b83138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9b818-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:80:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511234, 'reachable_time': 36129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253645, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.930 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8b533460-68fc-458d-b821-ddd902cd461f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.984 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[84306b35-d8bd-4cdc-9ff7-6ef21a36b9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.987 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9b818-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.988 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:09:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:09:59.989 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9b818-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:09:59 compute-0 nova_compute[188703]: 2026-02-24 16:09:59.994 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:09:59 compute-0 kernel: tap7ba9b818-e0: entered promiscuous mode
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.000 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:00 compute-0 NetworkManager[56995]: <info>  [1771949400.0014] manager: (tap7ba9b818-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:00.011 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9b818-e0, col_values=(('external_ids', {'iface-id': '0f982f60-a551-4bd9-8329-8decd220388f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:00 compute-0 ovn_controller[98701]: 2026-02-24T16:10:00Z|00118|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.014 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:00.018 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ba9b818-e146-43d5-9aff-1f87311842d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ba9b818-e146-43d5-9aff-1f87311842d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:00.019 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6c831-200a-4f1c-b5dc-a2239b51eeb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:00.021 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-7ba9b818-e146-43d5-9aff-1f87311842d0
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/7ba9b818-e146-43d5-9aff-1f87311842d0.pid.haproxy
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 7ba9b818-e146-43d5-9aff-1f87311842d0
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:10:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:00.022 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'env', 'PROCESS_TAG=haproxy-7ba9b818-e146-43d5-9aff-1f87311842d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ba9b818-e146-43d5-9aff-1f87311842d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.236 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949400.235853, 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.237 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] VM Started (Lifecycle Event)
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.256 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.261 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949400.2359571, 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.261 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] VM Paused (Lifecycle Event)
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.285 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.301 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:00 compute-0 nova_compute[188703]: 2026-02-24 16:10:00.320 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:00 compute-0 podman[253684]: 2026-02-24 16:10:00.485249317 +0000 UTC m=+0.119189106 container create b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 16:10:00 compute-0 podman[253684]: 2026-02-24 16:10:00.402996533 +0000 UTC m=+0.036936342 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:10:00 compute-0 systemd[1]: Started libpod-conmon-b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb.scope.
Feb 24 16:10:00 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d2ee765efcbdddfef977ee2894db02f57524e956848e9e330a34079db25ff5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:10:00 compute-0 podman[253684]: 2026-02-24 16:10:00.626440641 +0000 UTC m=+0.260380440 container init b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:10:00 compute-0 podman[253684]: 2026-02-24 16:10:00.636712844 +0000 UTC m=+0.270652623 container start b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:10:00 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [NOTICE]   (253703) : New worker (253705) forked
Feb 24 16:10:00 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [NOTICE]   (253703) : Loading success.
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.032 188707 DEBUG nova.compute.manager [req-05c2d732-e026-4c84-a608-f1a1611e5b11 req-28d9d43f-c8bb-4c8e-b915-43e3f1c3f77f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.033 188707 DEBUG oslo_concurrency.lockutils [req-05c2d732-e026-4c84-a608-f1a1611e5b11 req-28d9d43f-c8bb-4c8e-b915-43e3f1c3f77f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.033 188707 DEBUG oslo_concurrency.lockutils [req-05c2d732-e026-4c84-a608-f1a1611e5b11 req-28d9d43f-c8bb-4c8e-b915-43e3f1c3f77f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.034 188707 DEBUG oslo_concurrency.lockutils [req-05c2d732-e026-4c84-a608-f1a1611e5b11 req-28d9d43f-c8bb-4c8e-b915-43e3f1c3f77f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.035 188707 DEBUG nova.compute.manager [req-05c2d732-e026-4c84-a608-f1a1611e5b11 req-28d9d43f-c8bb-4c8e-b915-43e3f1c3f77f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Processing event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.036 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.045 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.046 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949401.0462205, 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.047 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] VM Resumed (Lifecycle Event)
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.055 188707 INFO nova.virt.libvirt.driver [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Instance spawned successfully.
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.056 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.082 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.096 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.108 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.113 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.115 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.116 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.118 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.119 188707 DEBUG nova.virt.libvirt.driver [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.134 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.188 188707 INFO nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Took 7.14 seconds to spawn the instance on the hypervisor.
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.189 188707 DEBUG nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.272 188707 INFO nova.compute.manager [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Took 7.62 seconds to build instance.
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.293 188707 DEBUG oslo_concurrency.lockutils [None req-0e70ee3c-e42c-4bb0-bd6d-44bdc650c920 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:01 compute-0 openstack_network_exporter[207830]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:10:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:10:01 compute-0 openstack_network_exporter[207830]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:10:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.525 188707 DEBUG nova.network.neutron [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated VIF entry in instance network info cache for port 06aae3cb-60d1-46ff-8ae9-26775338ef60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.525 188707 DEBUG nova.network.neutron [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:01 compute-0 nova_compute[188703]: 2026-02-24 16:10:01.541 188707 DEBUG oslo_concurrency.lockutils [req-415547f2-c408-443c-bf16-60aadc095dac req-52948be7-bc5b-4c1d-8dd8-54313892b781 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:02 compute-0 nova_compute[188703]: 2026-02-24 16:10:02.354 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949387.353319, 4ed039f2-92fd-4c07-9a3c-df2da1172e12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:02 compute-0 nova_compute[188703]: 2026-02-24 16:10:02.355 188707 INFO nova.compute.manager [-] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] VM Stopped (Lifecycle Event)
Feb 24 16:10:02 compute-0 nova_compute[188703]: 2026-02-24 16:10:02.377 188707 DEBUG nova.compute.manager [None req-836116fe-1a8c-4b27-af42-78db30af7dd8 - - - - - -] [instance: 4ed039f2-92fd-4c07-9a3c-df2da1172e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.160 188707 DEBUG nova.compute.manager [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.160 188707 DEBUG oslo_concurrency.lockutils [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.161 188707 DEBUG oslo_concurrency.lockutils [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.161 188707 DEBUG oslo_concurrency.lockutils [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.162 188707 DEBUG nova.compute.manager [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] No waiting events found dispatching network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.162 188707 WARNING nova.compute.manager [req-07de235a-10bf-4412-9b5e-faaa459814be req-c7c4d125-5be2-4eae-94fa-4b9ffb3065cc 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received unexpected event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 for instance with vm_state active and task_state None.
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.530 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:03 compute-0 ovn_controller[98701]: 2026-02-24T16:10:03Z|00119|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:10:03 compute-0 ovn_controller[98701]: 2026-02-24T16:10:03Z|00120|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:10:03 compute-0 ovn_controller[98701]: 2026-02-24T16:10:03Z|00121|binding|INFO|Releasing lport e6d03cb3-ba09-4724-83d3-edb05289054b from this chassis (sb_readonly=0)
Feb 24 16:10:03 compute-0 nova_compute[188703]: 2026-02-24 16:10:03.897 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:04 compute-0 nova_compute[188703]: 2026-02-24 16:10:04.016 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:04 compute-0 podman[253715]: 2026-02-24 16:10:04.157625945 +0000 UTC m=+0.117627123 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:10:04 compute-0 podman[253714]: 2026-02-24 16:10:04.163254061 +0000 UTC m=+0.120536674 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 16:10:04 compute-0 nova_compute[188703]: 2026-02-24 16:10:04.802 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:06 compute-0 nova_compute[188703]: 2026-02-24 16:10:06.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:07 compute-0 nova_compute[188703]: 2026-02-24 16:10:07.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:07 compute-0 nova_compute[188703]: 2026-02-24 16:10:07.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:10:07 compute-0 nova_compute[188703]: 2026-02-24 16:10:07.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:10:08 compute-0 nova_compute[188703]: 2026-02-24 16:10:08.234 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:08 compute-0 nova_compute[188703]: 2026-02-24 16:10:08.235 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:08 compute-0 nova_compute[188703]: 2026-02-24 16:10:08.236 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:10:08 compute-0 nova_compute[188703]: 2026-02-24 16:10:08.236 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:08 compute-0 nova_compute[188703]: 2026-02-24 16:10:08.532 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:09 compute-0 nova_compute[188703]: 2026-02-24 16:10:09.020 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:09 compute-0 nova_compute[188703]: 2026-02-24 16:10:09.256 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:10 compute-0 podman[253760]: 2026-02-24 16:10:10.138898186 +0000 UTC m=+0.096436727 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.483 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.502 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.503 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.504 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.504 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:10 compute-0 nova_compute[188703]: 2026-02-24 16:10:10.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:11 compute-0 nova_compute[188703]: 2026-02-24 16:10:11.309 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:11 compute-0 nova_compute[188703]: 2026-02-24 16:10:11.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:11 compute-0 nova_compute[188703]: 2026-02-24 16:10:11.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:13 compute-0 nova_compute[188703]: 2026-02-24 16:10:13.261 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:13.261 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:13.264 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
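Annotation: the agent deliberately waits before acknowledging the nb_cfg bump; the Chassis_Private write at 16:10:20 below carries exactly the nb_cfg=13 matched here, seven seconds later. A sketch of the delayed-ack pattern, illustrative only and not the agent's actual code (the plausible intent being that rapid SB_Global updates coalesce into one chassis-table write):

    import threading

    def ack_nb_cfg(nb_cfg):
        # Stand-in for the Chassis_Private external_ids write seen below.
        print(f"neutron:ovn-metadata-sb-cfg = {nb_cfg}")

    # Delay the ack so back-to-back nb_cfg bumps collapse into one write.
    t = threading.Timer(7.0, ack_nb_cfg, args=(13,))
    t.start()
    t.join()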
Feb 24 16:10:13 compute-0 nova_compute[188703]: 2026-02-24 16:10:13.536 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:14 compute-0 nova_compute[188703]: 2026-02-24 16:10:14.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:14 compute-0 ovn_controller[98701]: 2026-02-24T16:10:14Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:c9:69 10.100.0.10
Feb 24 16:10:14 compute-0 ovn_controller[98701]: 2026-02-24T16:10:14Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:c9:69 10.100.0.10
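Annotation: OVN serves DHCP natively from ovn-controller's pinctrl thread rather than via a dnsmasq process, and the MAC/IP pair here matches the port cached in the network_info at 16:10:10. The pinctrl lines are easy to machine-parse; a small sketch against the logged format:

    import re

    LINE = ("2026-02-24T16:10:14Z|00015|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:30:c9:69 10.100.0.10")
    PAT = re.compile(r"\|(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\S+)")

    m = PAT.search(LINE)
    if m:
        event, mac, ip = m.groups()
        print(event, mac, ip)  # DHCPACK fa:16:3e:30:c9:69 10.100.0.10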
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.205 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:15 compute-0 nova_compute[188703]: 2026-02-24 16:10:15.983 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.107 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.190 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.191 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.264 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.270 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.338 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.339 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.403 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.410 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.475 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.477 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.535 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
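Annotation: during update_available_resource each instance disk is probed twice in a row with qemu-img info, wrapped in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time (30 s) against malformed images. The probe reduces to the following (path copied from the log; the prlimit wrapper is dropped for brevity):

    import json, subprocess

    DISK = "/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk"

    out = subprocess.run(
        ["qemu-img", "info", DISK, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)
    # Virtual vs. on-disk size of the qcow2 overlay.
    print(info["format"], info["virtual-size"], info.get("actual-size"))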
Feb 24 16:10:16 compute-0 nova_compute[188703]: 2026-02-24 16:10:16.634 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.005 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.007 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4815MB free_disk=72.12774658203125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.007 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.008 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.127 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance e365caeb-efd7-437b-aa10-e579f7c99f2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.128 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fc3a62d6-b05f-4032-a883-8c231d29ff29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.128 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.129 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.130 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.236 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.254 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
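Annotation: placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above works out to 32 VCPU, 7167 MB of RAM, and 70.2 GB of disk:

    # Values copied from the inventory data in the log line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2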
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.291 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.292 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:17 compute-0 nova_compute[188703]: 2026-02-24 16:10:17.800 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:18 compute-0 podman[253813]: 2026-02-24 16:10:18.163867187 +0000 UTC m=+0.110735612 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:10:18 compute-0 podman[253812]: 2026-02-24 16:10:18.191055589 +0000 UTC m=+0.143820977 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:10:18 compute-0 nova_compute[188703]: 2026-02-24 16:10:18.466 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:18 compute-0 nova_compute[188703]: 2026-02-24 16:10:18.538 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:19 compute-0 nova_compute[188703]: 2026-02-24 16:10:19.026 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:20.267 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
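Annotation: this transaction is the delayed ack promised at 16:10:13; the agent stamps neutron:ovn-metadata-sb-cfg=13 into its Chassis_Private row. The row can be inspected with ovn-sbctl's generic database commands; a sketch (record UUID copied from the log, and ovn-sbctl must be able to reach the southbound DB):

    import subprocess

    rec = "ab329b13-e5ce-43e1-b513-c55bd650f251"
    print(subprocess.run(
        ["ovn-sbctl", "get", "Chassis_Private", rec, "external_ids"],
        check=True, capture_output=True, text=True,
    ).stdout)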
Feb 24 16:10:23 compute-0 podman[253864]: 2026-02-24 16:10:23.098216365 +0000 UTC m=+0.061669506 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Feb 24 16:10:23 compute-0 podman[253865]: 2026-02-24 16:10:23.109914488 +0000 UTC m=+0.069133892 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 24 16:10:23 compute-0 nova_compute[188703]: 2026-02-24 16:10:23.541 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:23 compute-0 nova_compute[188703]: 2026-02-24 16:10:23.695 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:23 compute-0 ovn_controller[98701]: 2026-02-24T16:10:23Z|00122|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:10:23 compute-0 ovn_controller[98701]: 2026-02-24T16:10:23Z|00123|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:10:23 compute-0 ovn_controller[98701]: 2026-02-24T16:10:23Z|00124|binding|INFO|Releasing lport e6d03cb3-ba09-4724-83d3-edb05289054b from this chassis (sb_readonly=0)
Feb 24 16:10:23 compute-0 nova_compute[188703]: 2026-02-24 16:10:23.865 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:24 compute-0 nova_compute[188703]: 2026-02-24 16:10:24.029 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:24 compute-0 ovn_controller[98701]: 2026-02-24T16:10:24Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:cd:7f 10.100.0.13
Feb 24 16:10:24 compute-0 ovn_controller[98701]: 2026-02-24T16:10:24Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:cd:7f 10.100.0.13
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.092 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.093 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.119 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.224 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.224 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.233 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.234 188707 INFO nova.compute.claims [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.421 188707 DEBUG nova.compute.provider_tree [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.439 188707 DEBUG nova.scheduler.client.report [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.461 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.462 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.516 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.516 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.543 188707 INFO nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.563 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.669 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.671 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.671 188707 INFO nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Creating image(s)
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.672 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.672 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.673 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.691 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.771 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.772 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.773 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.787 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.825 188707 DEBUG nova.policy [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83d32289882b4c908edcdcb01b704bef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6ab403a0b36466fb01268fa52ba862e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
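Annotation: this policy failure is routine for regular tenants; network:attach_external_network only gates plugging ports into external networks, and the credentials above carry just the member and reader roles. An illustrative reduction of the check, not oslo.policy itself, assuming the usual admin-only default for this rule:

    creds = {"is_admin": False, "roles": ["member", "reader"]}  # from the log
    allowed = creds["is_admin"] or "admin" in creds["roles"]
    print(allowed)  # False -> attach_external_network denied; the boot continues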
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.846 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.847 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.884 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
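Annotation: the instance disk is a qcow2 copy-on-write overlay: guest writes land in the per-instance file while reads of untouched blocks fall through to the shared raw base under _base, which is how several instances reuse one cached image. The logged command, minus the env wrapper (paths and the 1073741824-byte size copied from the log):

    import subprocess

    base = "/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683"
    disk = "/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],
        check=True,
    )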
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.885 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.886 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.940 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.942 188707 DEBUG nova.virt.disk.api [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Checking if we can resize image /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:10:25 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.942 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:25.999 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.001 188707 DEBUG nova.virt.disk.api [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Cannot resize image /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
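Annotation: instance images are only ever grown; the flavor's 1073741824-byte (1 GiB) root disk is not larger than the overlay's current virtual size, so the resize is skipped. A paraphrase of the guard, not Nova's exact code:

    import json, subprocess

    def maybe_grow(path, new_size):
        info = json.loads(subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout)
        if new_size <= info["virtual-size"]:
            print("Cannot resize image to a smaller size; skipping")
            return
        subprocess.run(["qemu-img", "resize", path, str(new_size)], check=True)

    # e.g. maybe_grow("/var/lib/nova/instances/<uuid>/disk", 1073741824)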
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.001 188707 DEBUG nova.objects.instance [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lazy-loading 'migration_context' on Instance uuid 9782c01d-ae8a-45f3-8949-f89d691eba6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.015 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.016 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Ensure instance console log exists: /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.017 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.017 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:26 compute-0 nova_compute[188703]: 2026-02-24 16:10:26.018 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:28 compute-0 podman[253917]: 2026-02-24 16:10:28.128347299 +0000 UTC m=+0.088292472 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal)
Feb 24 16:10:28 compute-0 nova_compute[188703]: 2026-02-24 16:10:28.256 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Successfully created port: da05bd1b-9982-49c8-81dd-f58a5c0d1345 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:10:28 compute-0 nova_compute[188703]: 2026-02-24 16:10:28.544 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.032 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.389 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Successfully updated port: da05bd1b-9982-49c8-81dd-f58a5c0d1345 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.409 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.410 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquired lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.411 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.560 188707 DEBUG nova.compute.manager [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-changed-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.561 188707 DEBUG nova.compute.manager [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Refreshing instance network info cache due to event network-changed-da05bd1b-9982-49c8-81dd-f58a5c0d1345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.563 188707 DEBUG oslo_concurrency.lockutils [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:29 compute-0 podman[204685]: time="2026-02-24T16:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:10:29 compute-0 nova_compute[188703]: 2026-02-24 16:10:29.746 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:10:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31703 "" "Go-http-client/1.1"
Feb 24 16:10:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5316 "" "Go-http-client/1.1"
Feb 24 16:10:31 compute-0 openstack_network_exporter[207830]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:10:31 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:10:31 compute-0 openstack_network_exporter[207830]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:10:31 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.482 188707 INFO nova.compute.manager [None req-bd80df86-dbfa-4f1f-a82c-fbfe6e884ce6 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Get console output
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.580 241980 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.716 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.825 188707 DEBUG nova.network.neutron [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updating instance_info_cache with network_info: [{"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.850 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Releasing lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.851 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Instance network_info: |[{"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.852 188707 DEBUG oslo_concurrency.lockutils [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.853 188707 DEBUG nova.network.neutron [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Refreshing network info cache for port da05bd1b-9982-49c8-81dd-f58a5c0d1345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.858 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Start _get_guest_xml network_info=[{"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.867 188707 WARNING nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.878 188707 DEBUG nova.virt.libvirt.host [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.878 188707 DEBUG nova.virt.libvirt.host [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.885 188707 DEBUG nova.virt.libvirt.host [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.886 188707 DEBUG nova.virt.libvirt.host [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.886 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.887 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.887 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.888 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.888 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.888 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.889 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.889 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.889 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.890 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.890 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.890 188707 DEBUG nova.virt.hardware [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.894 188707 DEBUG nova.virt.libvirt.vif [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-983728058',display_name='tempest-ServersTestManualDisk-server-983728058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-983728058',id=12,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOvgey79FSzQjb7HrD3TkBXwwWs6nGsxObUPD+UMS+EVqQThgiBgNRggK3+5NTLKC76Ef5hE+aThJqgnVTB6sqHgRMj4kt5hLD50PIsyX1PwmijWa6hphJLw3rmhCMAuQ==',key_name='tempest-keypair-592323468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6ab403a0b36466fb01268fa52ba862e',ramdisk_id='',reservation_id='r-e9lq02kz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1363247561',owner_user_name='tempest-ServersTestManualDisk-1363247561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83d32289882b4c908edcdcb01b704bef',uuid=9782c01d-ae8a-45f3-8949-f89d691eba6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.894 188707 DEBUG nova.network.os_vif_util [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converting VIF {"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.895 188707 DEBUG nova.network.os_vif_util [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.896 188707 DEBUG nova.objects.instance [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9782c01d-ae8a-45f3-8949-f89d691eba6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.911 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <uuid>9782c01d-ae8a-45f3-8949-f89d691eba6f</uuid>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <name>instance-0000000c</name>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:name>tempest-ServersTestManualDisk-server-983728058</nova:name>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:10:31</nova:creationTime>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:user uuid="83d32289882b4c908edcdcb01b704bef">tempest-ServersTestManualDisk-1363247561-project-member</nova:user>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:project uuid="e6ab403a0b36466fb01268fa52ba862e">tempest-ServersTestManualDisk-1363247561</nova:project>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         <nova:port uuid="da05bd1b-9982-49c8-81dd-f58a5c0d1345">
Feb 24 16:10:31 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <system>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="serial">9782c01d-ae8a-45f3-8949-f89d691eba6f</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="uuid">9782c01d-ae8a-45f3-8949-f89d691eba6f</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </system>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <os>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </os>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <features>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </features>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.config"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:d6:bc:a6"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <target dev="tapda05bd1b-99"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/console.log" append="off"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <video>
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </video>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:10:31 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:10:31 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:10:31 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:10:31 compute-0 nova_compute[188703]: </domain>
Feb 24 16:10:31 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.912 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Preparing to wait for external event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.913 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.913 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.913 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.914 188707 DEBUG nova.virt.libvirt.vif [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-983728058',display_name='tempest-ServersTestManualDisk-server-983728058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-983728058',id=12,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOvgey79FSzQjb7HrD3TkBXwwWs6nGsxObUPD+UMS+EVqQThgiBgNRggK3+5NTLKC76Ef5hE+aThJqgnVTB6sqHgRMj4kt5hLD50PIsyX1PwmijWa6hphJLw3rmhCMAuQ==',key_name='tempest-keypair-592323468',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6ab403a0b36466fb01268fa52ba862e',ramdisk_id='',reservation_id='r-e9lq02kz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1363247561',owner_user_name='tempest-ServersTestManualDisk-1363247561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83d32289882b4c908edcdcb01b704bef',uuid=9782c01d-ae8a-45f3-8949-f89d691eba6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.914 188707 DEBUG nova.network.os_vif_util [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converting VIF {"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.915 188707 DEBUG nova.network.os_vif_util [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.915 188707 DEBUG os_vif [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.916 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.916 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.917 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.919 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.919 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda05bd1b-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.920 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda05bd1b-99, col_values=(('external_ids', {'iface-id': 'da05bd1b-9982-49c8-81dd-f58a5c0d1345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:bc:a6', 'vm-uuid': '9782c01d-ae8a-45f3-8949-f89d691eba6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.921 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:31 compute-0 NetworkManager[56995]: <info>  [1771949431.9238] manager: (tapda05bd1b-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.933 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:31 compute-0 nova_compute[188703]: 2026-02-24 16:10:31.935 188707 INFO os_vif [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99')
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.006 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.007 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.007 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] No VIF found with MAC fa:16:3e:d6:bc:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.007 188707 INFO nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Using config drive
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.662 188707 INFO nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Creating config drive at /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.config
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.671 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4i1fasdl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.795 188707 DEBUG oslo_concurrency.processutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4i1fasdl" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:32 compute-0 kernel: tapda05bd1b-99: entered promiscuous mode
Feb 24 16:10:32 compute-0 NetworkManager[56995]: <info>  [1771949432.8515] manager: (tapda05bd1b-99): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.853 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:32 compute-0 ovn_controller[98701]: 2026-02-24T16:10:32Z|00125|binding|INFO|Claiming lport da05bd1b-9982-49c8-81dd-f58a5c0d1345 for this chassis.
Feb 24 16:10:32 compute-0 ovn_controller[98701]: 2026-02-24T16:10:32Z|00126|binding|INFO|da05bd1b-9982-49c8-81dd-f58a5c0d1345: Claiming fa:16:3e:d6:bc:a6 10.100.0.10
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.859 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:bc:a6 10.100.0.10'], port_security=['fa:16:3e:d6:bc:a6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9782c01d-ae8a-45f3-8949-f89d691eba6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6ab403a0b36466fb01268fa52ba862e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4560abf-06f9-4335-9755-51efb753bd43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a94e8151-aa5c-447d-9a29-27f3b61e8a71, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=da05bd1b-9982-49c8-81dd-f58a5c0d1345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.861 108026 INFO neutron.agent.ovn.metadata.agent [-] Port da05bd1b-9982-49c8-81dd-f58a5c0d1345 in datapath ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a bound to our chassis
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.864 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a
Feb 24 16:10:32 compute-0 ovn_controller[98701]: 2026-02-24T16:10:32Z|00127|binding|INFO|Setting lport da05bd1b-9982-49c8-81dd-f58a5c0d1345 ovn-installed in OVS
Feb 24 16:10:32 compute-0 ovn_controller[98701]: 2026-02-24T16:10:32Z|00128|binding|INFO|Setting lport da05bd1b-9982-49c8-81dd-f58a5c0d1345 up in Southbound
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.869 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:32 compute-0 nova_compute[188703]: 2026-02-24 16:10:32.873 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.876 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9401ec34-6246-4079-97c6-7e67268e41b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.876 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef4b7dfc-51 in ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.879 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef4b7dfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.879 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4dff9b39-ef04-4e3f-a1b8-fba42fb7c6ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.881 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4ad8fb-373f-4ebc-8f0b-4137eb3a8d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.889 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[5235c58e-8235-47b0-b582-a71ac4cf5cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 systemd-machined[158049]: New machine qemu-12-instance-0000000c.
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.912 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[95d6ae1d-c031-4e97-9884-51abdc482e68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb 24 16:10:32 compute-0 systemd-udevd[253973]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:10:32 compute-0 NetworkManager[56995]: <info>  [1771949432.9343] device (tapda05bd1b-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:10:32 compute-0 NetworkManager[56995]: <info>  [1771949432.9381] device (tapda05bd1b-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.940 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[a2adc5da-0f31-4e53-ab11-3066cd2bae6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.949 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[5612ea4b-50ca-4d0b-9de0-ae718c8069de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 NetworkManager[56995]: <info>  [1771949432.9515] manager: (tapef4b7dfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Feb 24 16:10:32 compute-0 systemd-udevd[253977]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.972 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[03612092-e34f-497e-8f72-17a49d7b98a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.975 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[87d7872e-533a-480c-81eb-303c457a175e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:32 compute-0 NetworkManager[56995]: <info>  [1771949432.9913] device (tapef4b7dfc-50): carrier: link connected
Feb 24 16:10:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:32.995 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[662781f7-4466-4fbb-a5e2-6c3e790d700e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.011 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[22a80c53-4faf-4004-8a88-2dfc92ca6c31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef4b7dfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:e2:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514549, 'reachable_time': 19824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254003, 'error': None, 'target': 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.024 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[07ef4e95-18d0-4f34-859a-7adcbedebb63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:e223'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514549, 'tstamp': 514549}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254004, 'error': None, 'target': 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.037 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8711c243-aba1-46eb-851f-6fdcc76c6f51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef4b7dfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:e2:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514549, 'reachable_time': 19824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254005, 'error': None, 'target': 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
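
Annotation: the two large privsep replies above are pyroute2 netlink messages (RTM_NEWLINK) serialized into the log; each carries an 'attrs' list of [name, value] pairs. A tiny sketch of pulling fields out of one, assuming msg is the dict shown above (real pyroute2 message objects expose an equivalent get_attr() method):

    # Extract a named IFLA_* attribute from a dumped RTM_NEWLINK message.
    def get_attr(msg, name):
        for key, value in msg['attrs']:
            if key == name:
                return value
        return None

    # For the reply above:
    #   get_attr(msg, 'IFLA_IFNAME')    -> 'tapef4b7dfc-51'
    #   get_attr(msg, 'IFLA_ADDRESS')   -> 'fa:16:3e:06:e2:23'
    #   get_attr(msg, 'IFLA_OPERSTATE') -> 'UP'
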
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.067 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[82fe9cd6-d5d3-4de2-a02b-806fbe51ab7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.122 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[986900f9-03ab-4111-9440-baa04e0df69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.124 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef4b7dfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.124 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.125 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef4b7dfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:33 compute-0 NetworkManager[56995]: <info>  [1771949433.1293] manager: (tapef4b7dfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 24 16:10:33 compute-0 kernel: tapef4b7dfc-50: entered promiscuous mode
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.128 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.132 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.133 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef4b7dfc-50, col_values=(('external_ids', {'iface-id': 'a205b4a6-5de5-404a-ade1-c5cea3a5ab41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
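
Annotation: the three ovsdbapp commands above remove the host end of the veth from br-ex (a no-op here), plug it into br-int, and tag the Interface with the iface-id that ovn-controller uses to bind the logical port. A sketch of issuing the same transaction through ovsdbapp's Open_vSwitch API, assuming a local ovsdb-server at the usual socket path; the connection wiring is simplified relative to the agent's:

    # Plug tapef4b7dfc-50 into br-int and set its iface-id, as one transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapef4b7dfc-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapef4b7dfc-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapef4b7dfc-50',
            ('external_ids', {'iface-id': 'a205b4a6-5de5-404a-ade1-c5cea3a5ab41'})))
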
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.135 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 ovn_controller[98701]: 2026-02-24T16:10:33Z|00129|binding|INFO|Releasing lport a205b4a6-5de5-404a-ade1-c5cea3a5ab41 from this chassis (sb_readonly=0)
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.141 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.142 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.143 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
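
Annotation: the "Unable to access ...pid.haproxy" debug line is the expected first-run path: the agent reads the pidfile to decide whether a metadata proxy already serves this network, and ENOENT simply means "not running yet". A simplified equivalent of that read (the real helper is neutron.agent.linux.utils.get_value_from_file):

    # Tolerant pidfile read: a missing file means no proxy is running yet.
    def get_value_from_file(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            return None   # logged at debug, treated as "not running"
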
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.144 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[75ce5b66-b602-41aa-93f4-345abe8bc1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.146 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a.pid.haproxy
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:10:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:33.149 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'env', 'PROCESS_TAG=haproxy-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
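
Annotation: the generated config makes haproxy listen on 169.254.169.254:80 inside the namespace and forward to the agent's UNIX socket (a server address beginning with '/' in haproxy is a UNIX socket), stamping each request with X-OVN-Network-ID so the agent can resolve the requesting instance. The rootwrap command above then launches it inside the namespace; a sketch of the same invocation without the agent's process-manager machinery, assuming root:

    # Start haproxy inside the metadata namespace (paths taken from the log).
    import subprocess

    subprocess.run([
        'ip', 'netns', 'exec', 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a',
        'env', 'PROCESS_TAG=haproxy-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a.conf',
    ], check=True)   # the agent routes this through sudo + neutron-rootwrap
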
Feb 24 16:10:33 compute-0 podman[254036]: 2026-02-24 16:10:33.542513271 +0000 UTC m=+0.074604263 container create a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.546 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:33 compute-0 systemd[1]: Started libpod-conmon-a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7.scope.
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.579 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949433.5786343, 9782c01d-ae8a-45f3-8949-f89d691eba6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.579 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] VM Started (Lifecycle Event)
Feb 24 16:10:33 compute-0 podman[254036]: 2026-02-24 16:10:33.498944817 +0000 UTC m=+0.031035829 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.604 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:33 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.610 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949433.5787306, 9782c01d-ae8a-45f3-8949-f89d691eba6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.610 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] VM Paused (Lifecycle Event)
Feb 24 16:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394e2458868216a85aa9c692a64faa7b1ac673d72c6258946b44a74835d52618/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.628 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.632 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
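
Annotation: the integers in "current DB power_state: 0, VM power_state: 3" are nova.compute.power_state constants; the mismatch is normal mid-spawn, since libvirt starts the guest paused and resumes it once VIF plugging completes, matching the Started / Paused / Resumed lifecycle events in this log. The mapping as defined in nova (values believed current; worth verifying against the deployed tree):

    # nova.compute.power_state constants referenced by the sync message above.
    NOSTATE   = 0   # DB value while the instance is still building
    RUNNING   = 1
    PAUSED    = 3   # hypervisor state reported here
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7
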
Feb 24 16:10:33 compute-0 podman[254036]: 2026-02-24 16:10:33.644420649 +0000 UTC m=+0.176511671 container init a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:10:33 compute-0 podman[254036]: 2026-02-24 16:10:33.651432812 +0000 UTC m=+0.183523804 container start a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.664 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:33 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [NOTICE]   (254061) : New worker (254063) forked
Feb 24 16:10:33 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [NOTICE]   (254061) : Loading success.
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.918 188707 DEBUG nova.compute.manager [req-dea18e78-1bbf-465c-bb40-a83e6718f440 req-bbae793f-d712-4dc0-bde9-fbf5bd0713a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.918 188707 DEBUG oslo_concurrency.lockutils [req-dea18e78-1bbf-465c-bb40-a83e6718f440 req-bbae793f-d712-4dc0-bde9-fbf5bd0713a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.919 188707 DEBUG oslo_concurrency.lockutils [req-dea18e78-1bbf-465c-bb40-a83e6718f440 req-bbae793f-d712-4dc0-bde9-fbf5bd0713a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.919 188707 DEBUG oslo_concurrency.lockutils [req-dea18e78-1bbf-465c-bb40-a83e6718f440 req-bbae793f-d712-4dc0-bde9-fbf5bd0713a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.919 188707 DEBUG nova.compute.manager [req-dea18e78-1bbf-465c-bb40-a83e6718f440 req-bbae793f-d712-4dc0-bde9-fbf5bd0713a8 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Processing event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.920 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
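
Annotation: the spawn path blocks on a keyed event, "network-vif-plugged-<port-id>", which neutron delivers through external_instance_event; here the event arrived while nova was already waiting, hence "wait completed in 0 seconds". An illustrative reduction of the pattern (nova's real version is nova.compute.manager.InstanceEvents plus wait_for_instance_event, with timeouts and error handling):

    # Keyed event rendezvous between the spawn thread and the event callback.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()

        def prepare(self, instance_uuid, name):
            with self._lock:
                return self._events.setdefault(
                    (instance_uuid, name), threading.Event())

        def pop(self, instance_uuid, name):
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev is None:
                return False   # -> "No waiting events found dispatching ..."
            ev.set()
            return True

    # spawn side: events.prepare(uuid, 'network-vif-plugged-<port>').wait(300)
    # event side: events.pop(uuid, 'network-vif-plugged-<port>')
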
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.924 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949433.9244924, 9782c01d-ae8a-45f3-8949-f89d691eba6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.924 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] VM Resumed (Lifecycle Event)
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.932 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.938 188707 INFO nova.virt.libvirt.driver [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Instance spawned successfully.
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.939 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.956 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.963 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.967 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.967 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.967 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.968 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.968 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.968 188707 DEBUG nova.virt.libvirt.driver [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:33 compute-0 nova_compute[188703]: 2026-02-24 16:10:33.996 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.073 188707 INFO nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Took 8.40 seconds to spawn the instance on the hypervisor.
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.073 188707 DEBUG nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.156 188707 INFO nova.compute.manager [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Took 8.96 seconds to build instance.
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.171 188707 DEBUG oslo_concurrency.lockutils [None req-036ebeff-5302-49b3-9f1f-c440e8787097 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.899 188707 DEBUG nova.network.neutron [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updated VIF entry in instance network info cache for port da05bd1b-9982-49c8-81dd-f58a5c0d1345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.900 188707 DEBUG nova.network.neutron [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updating instance_info_cache with network_info: [{"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:34 compute-0 nova_compute[188703]: 2026-02-24 16:10:34.922 188707 DEBUG oslo_concurrency.lockutils [req-b76da31b-d907-434d-81e9-70a2933f5455 req-03ac8dd8-daee-4d23-96a5-2592b36c3ca0 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
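
Annotation: the instance_info_cache payload logged above is plain JSON; note "active": false at this point, since the Port_Binding row has not yet been marked up. A sketch of summarizing the fields that matter when debugging, where network_info_json is assumed to hold the [...] payload from the log line:

    # Summarize a nova instance_info_cache network_info payload.
    import json

    def summarize(network_info_json):
        for vif in json.loads(network_info_json):
            fixed_ips = [ip['address']
                         for subnet in vif['network']['subnets']
                         for ip in subnet['ips']]
            print(vif['id'], vif['address'], fixed_ips,
                  'active=%s' % vif['active'])

    # For the entry above:
    # da05bd1b-9982-49c8-81dd-f58a5c0d1345 fa:16:3e:d6:bc:a6 ['10.100.0.10'] active=False
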
Feb 24 16:10:35 compute-0 ovn_controller[98701]: 2026-02-24T16:10:35Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:a3:60 10.100.2.165
Feb 24 16:10:35 compute-0 ovn_controller[98701]: 2026-02-24T16:10:35Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:a3:60 10.100.2.165
Feb 24 16:10:35 compute-0 podman[254072]: 2026-02-24 16:10:35.144460089 +0000 UTC m=+0.092567450 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 24 16:10:35 compute-0 podman[254073]: 2026-02-24 16:10:35.194006309 +0000 UTC m=+0.142105280 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.041 188707 DEBUG nova.compute.manager [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.041 188707 DEBUG oslo_concurrency.lockutils [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.041 188707 DEBUG oslo_concurrency.lockutils [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.042 188707 DEBUG oslo_concurrency.lockutils [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.042 188707 DEBUG nova.compute.manager [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] No waiting events found dispatching network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.042 188707 WARNING nova.compute.manager [req-b200c9d3-e2c3-492e-9efb-495ebff1111a req-59ff1a7a-e84d-4264-9266-b15d0439f713 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received unexpected event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 for instance with vm_state active and task_state None.
Feb 24 16:10:36 compute-0 nova_compute[188703]: 2026-02-24 16:10:36.922 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.142 188707 DEBUG nova.compute.manager [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-changed-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.142 188707 DEBUG nova.compute.manager [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing instance network info cache due to event network-changed-89307b57-fe85-45b9-b123-781c385e8fec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.143 188707 DEBUG oslo_concurrency.lockutils [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.143 188707 DEBUG oslo_concurrency.lockutils [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.143 188707 DEBUG nova.network.neutron [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Refreshing network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:10:38 compute-0 nova_compute[188703]: 2026-02-24 16:10:38.550 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:39 compute-0 nova_compute[188703]: 2026-02-24 16:10:39.733 188707 DEBUG nova.compute.manager [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-changed-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:39 compute-0 nova_compute[188703]: 2026-02-24 16:10:39.733 188707 DEBUG nova.compute.manager [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Refreshing instance network info cache due to event network-changed-da05bd1b-9982-49c8-81dd-f58a5c0d1345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:10:39 compute-0 nova_compute[188703]: 2026-02-24 16:10:39.734 188707 DEBUG oslo_concurrency.lockutils [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:39 compute-0 nova_compute[188703]: 2026-02-24 16:10:39.734 188707 DEBUG oslo_concurrency.lockutils [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:39 compute-0 nova_compute[188703]: 2026-02-24 16:10:39.734 188707 DEBUG nova.network.neutron [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Refreshing network info cache for port da05bd1b-9982-49c8-81dd-f58a5c0d1345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.181 188707 DEBUG nova.network.neutron [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updated VIF entry in instance network info cache for port 89307b57-fe85-45b9-b123-781c385e8fec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.182 188707 DEBUG nova.network.neutron [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.221 188707 DEBUG oslo_concurrency.lockutils [req-d83b5a93-ff75-4534-a2e2-94142e0c5279 req-d8917f7b-f14a-4de3-9dbc-cc9c613b1f1d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.720 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.720 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.720 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.721 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.721 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.722 188707 INFO nova.compute.manager [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Terminating instance
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.723 188707 DEBUG nova.compute.manager [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:10:40 compute-0 kernel: tapda05bd1b-99 (unregistering): left promiscuous mode
Feb 24 16:10:40 compute-0 NetworkManager[56995]: <info>  [1771949440.7542] device (tapda05bd1b-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.763 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:40 compute-0 ovn_controller[98701]: 2026-02-24T16:10:40Z|00130|binding|INFO|Releasing lport da05bd1b-9982-49c8-81dd-f58a5c0d1345 from this chassis (sb_readonly=0)
Feb 24 16:10:40 compute-0 ovn_controller[98701]: 2026-02-24T16:10:40Z|00131|binding|INFO|Setting lport da05bd1b-9982-49c8-81dd-f58a5c0d1345 down in Southbound
Feb 24 16:10:40 compute-0 ovn_controller[98701]: 2026-02-24T16:10:40Z|00132|binding|INFO|Removing iface tapda05bd1b-99 ovn-installed in OVS
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.775 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.780 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:40.787 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:bc:a6 10.100.0.10'], port_security=['fa:16:3e:d6:bc:a6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9782c01d-ae8a-45f3-8949-f89d691eba6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6ab403a0b36466fb01268fa52ba862e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4560abf-06f9-4335-9755-51efb753bd43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a94e8151-aa5c-447d-9a29-27f3b61e8a71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=da05bd1b-9982-49c8-81dd-f58a5c0d1345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:40.789 108026 INFO neutron.agent.ovn.metadata.agent [-] Port da05bd1b-9982-49c8-81dd-f58a5c0d1345 in datapath ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a unbound from our chassis
Feb 24 16:10:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:40.794 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:10:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:40.796 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[997719ea-29b8-4f5c-868f-6e8d43783c8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:40.797 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a namespace which is not needed anymore
Feb 24 16:10:40 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 24 16:10:40 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 7.650s CPU time.
Feb 24 16:10:40 compute-0 systemd-machined[158049]: Machine qemu-12-instance-0000000c terminated.
Feb 24 16:10:40 compute-0 podman[254114]: 2026-02-24 16:10:40.864889059 +0000 UTC m=+0.095026119 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [NOTICE]   (254061) : haproxy version is 2.8.14-c23fe91
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [NOTICE]   (254061) : path to executable is /usr/sbin/haproxy
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [WARNING]  (254061) : Exiting Master process...
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [WARNING]  (254061) : Exiting Master process...
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [ALERT]    (254061) : Current worker (254063) exited with code 143 (Terminated)
Feb 24 16:10:40 compute-0 neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a[254057]: [WARNING]  (254061) : All workers exited. Exiting... (0)
Feb 24 16:10:40 compute-0 systemd[1]: libpod-a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7.scope: Deactivated successfully.
Feb 24 16:10:40 compute-0 podman[254160]: 2026-02-24 16:10:40.936298653 +0000 UTC m=+0.049071838 container died a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7-userdata-shm.mount: Deactivated successfully.
Feb 24 16:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-394e2458868216a85aa9c692a64faa7b1ac673d72c6258946b44a74835d52618-merged.mount: Deactivated successfully.
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.984 188707 INFO nova.virt.libvirt.driver [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Instance destroyed successfully.
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.984 188707 DEBUG nova.objects.instance [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lazy-loading 'resources' on Instance uuid 9782c01d-ae8a-45f3-8949-f89d691eba6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:40 compute-0 podman[254160]: 2026-02-24 16:10:40.986725817 +0000 UTC m=+0.099499002 container cleanup a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.998 188707 DEBUG nova.virt.libvirt.vif [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-983728058',display_name='tempest-ServersTestManualDisk-server-983728058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-983728058',id=12,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOvgey79FSzQjb7HrD3TkBXwwWs6nGsxObUPD+UMS+EVqQThgiBgNRggK3+5NTLKC76Ef5hE+aThJqgnVTB6sqHgRMj4kt5hLD50PIsyX1PwmijWa6hphJLw3rmhCMAuQ==',key_name='tempest-keypair-592323468',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6ab403a0b36466fb01268fa52ba862e',ramdisk_id='',reservation_id='r-e9lq02kz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1363247561',owner_user_name='tempest-ServersTestManualDisk-1363247561-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83d32289882b4c908edcdcb01b704bef',uuid=9782c01d-ae8a-45f3-8949-f89d691eba6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:10:40 compute-0 nova_compute[188703]: 2026-02-24 16:10:40.999 188707 DEBUG nova.network.os_vif_util [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converting VIF {"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.000 188707 DEBUG nova.network.os_vif_util [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.000 188707 DEBUG os_vif [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:10:41 compute-0 systemd[1]: libpod-conmon-a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7.scope: Deactivated successfully.
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.003 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.004 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda05bd1b-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.006 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.009 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.012 188707 INFO os_vif [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:bc:a6,bridge_name='br-int',has_traffic_filtering=True,id=da05bd1b-9982-49c8-81dd-f58a5c0d1345,network=Network(ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda05bd1b-99')
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.013 188707 INFO nova.virt.libvirt.driver [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Deleting instance files /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f_del
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.013 188707 INFO nova.virt.libvirt.driver [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Deletion of /var/lib/nova/instances/9782c01d-ae8a-45f3-8949-f89d691eba6f_del complete
Feb 24 16:10:41 compute-0 podman[254207]: 2026-02-24 16:10:41.053614977 +0000 UTC m=+0.042601810 container remove a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.059 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef0c774-39e2-445f-b161-d5c39e2cea54]: (4, ('Tue Feb 24 04:10:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a (a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7)\na7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7\nTue Feb 24 04:10:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a (a7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7)\na7d220f186024b0a27a52f593a2c8ee1040c475f5f0f36f92873d68b494cdfc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.060 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[79e30ae4-4d2e-43b4-85e7-ddc050732a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.062 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef4b7dfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:41 compute-0 kernel: tapef4b7dfc-50: left promiscuous mode
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.077 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.079 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[aaddfde5-7e0a-4c96-ae87-64895891e6f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.085 188707 INFO nova.compute.manager [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.086 188707 DEBUG oslo.service.loopingcall [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.086 188707 DEBUG nova.compute.manager [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.087 188707 DEBUG nova.network.neutron [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.103 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[6730ed7a-c033-4e8d-a520-1fce861655b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.107 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[050c0825-ee36-4097-a927-43b18b746462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.124 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c717a1-3dfc-474f-80ca-0bf3e0fc21e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514544, 'reachable_time': 32775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254222, 'error': None, 'target': 'ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.127 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:10:41 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:41.127 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[320cb9a6-0c99-49a1-8195-5fa269403060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:41 compute-0 systemd[1]: run-netns-ovnmeta\x2def4b7dfc\x2d50b5\x2d4219\x2db5ab\x2ddd54d50ffa6a.mount: Deactivated successfully.
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.572 188707 DEBUG nova.network.neutron [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updated VIF entry in instance network info cache for port da05bd1b-9982-49c8-81dd-f58a5c0d1345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.573 188707 DEBUG nova.network.neutron [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updating instance_info_cache with network_info: [{"id": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "address": "fa:16:3e:d6:bc:a6", "network": {"id": "ef4b7dfc-50b5-4219-b5ab-dd54d50ffa6a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-777488091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ab403a0b36466fb01268fa52ba862e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda05bd1b-99", "ovs_interfaceid": "da05bd1b-9982-49c8-81dd-f58a5c0d1345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.592 188707 DEBUG oslo_concurrency.lockutils [req-2229795c-dee6-459c-8769-7d9dc3513987 req-70882000-9c6f-49e7-b2be-290798cf5fc3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-9782c01d-ae8a-45f3-8949-f89d691eba6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.833 188707 DEBUG nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-unplugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.836 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.837 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.838 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.838 188707 DEBUG nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] No waiting events found dispatching network-vif-unplugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.839 188707 DEBUG nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-unplugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.839 188707 DEBUG nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.840 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.841 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.842 188707 DEBUG oslo_concurrency.lockutils [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.843 188707 DEBUG nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] No waiting events found dispatching network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:41 compute-0 nova_compute[188703]: 2026-02-24 16:10:41.843 188707 WARNING nova.compute.manager [req-8a0fd81e-8001-49eb-85e8-d9f3eac88bf2 req-e0b3b883-1b0b-4418-b10a-1b500c012d01 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received unexpected event network-vif-plugged-da05bd1b-9982-49c8-81dd-f58a5c0d1345 for instance with vm_state active and task_state deleting.
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.424 188707 DEBUG nova.network.neutron [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.451 188707 INFO nova.compute.manager [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Took 1.36 seconds to deallocate network for instance.
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.506 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.506 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.882 188707 DEBUG nova.compute.provider_tree [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.907 188707 DEBUG nova.scheduler.client.report [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.933 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:42 compute-0 nova_compute[188703]: 2026-02-24 16:10:42.968 188707 INFO nova.scheduler.client.report [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Deleted allocations for instance 9782c01d-ae8a-45f3-8949-f89d691eba6f
Feb 24 16:10:43 compute-0 nova_compute[188703]: 2026-02-24 16:10:43.037 188707 DEBUG oslo_concurrency.lockutils [None req-b80ab567-f77d-44b3-a7d1-d14ccb6bedb3 83d32289882b4c908edcdcb01b704bef e6ab403a0b36466fb01268fa52ba862e - - default default] Lock "9782c01d-ae8a-45f3-8949-f89d691eba6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:43 compute-0 nova_compute[188703]: 2026-02-24 16:10:43.554 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.009 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.077 188707 DEBUG nova.compute.manager [req-dd621216-460a-474b-8fdd-76f85adce553 req-0f07fbe8-b661-420c-8503-ff43a9c37cae 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Received event network-vif-deleted-da05bd1b-9982-49c8-81dd-f58a5c0d1345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.301 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.302 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.327 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.404 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.404 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.416 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.417 188707 INFO nova.compute.claims [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.588 188707 DEBUG nova.compute.provider_tree [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.608 188707 DEBUG nova.scheduler.client.report [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.635 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.637 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.683 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.684 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.704 188707 INFO nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.727 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.761 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.762 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.785 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.858 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.859 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.860 188707 INFO nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Creating image(s)
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.860 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.861 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.861 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.875 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.891 188707 DEBUG nova.policy [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7cec00195bca4d15bbb0449e21faedcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d42735c7eb84888b6c3dca096466e04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.895 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.896 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.903 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.904 188707 INFO nova.compute.claims [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.950 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.951 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.952 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:46 compute-0 nova_compute[188703]: 2026-02-24 16:10:46.964 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.022 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.024 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.074 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.075 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.076 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.145 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.146 188707 DEBUG nova.virt.disk.api [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Checking if we can resize image /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.147 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.174 188707 DEBUG nova.compute.provider_tree [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.200 188707 DEBUG nova.scheduler.client.report [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
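[editor's note] The inventory dictionary in the record above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Checking the reported figures:

    # Figures copied from the inventory record above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 schedulable units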
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.211 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.213 188707 DEBUG nova.virt.disk.api [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Cannot resize image /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.213 188707 DEBUG nova.objects.instance [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'migration_context' on Instance uuid 511a6d08-b421-4fcb-bb1a-13d6ee450a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.231 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.232 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Ensure instance console log exists: /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.233 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.234 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.234 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
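[editor's note] The acquire/held/released triplet around _allocate_mdevs is the signature of oslo.concurrency's lock decorator; roughly what the instrumented code looks like (a sketch of the pattern, not Nova's actual source):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs():
        # Runs with the in-process "vgpu_resources" lock held; the
        # waited/held timing lines in the log are emitted by the decorator.
        pass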
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.237 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.239 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.313 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.313 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.344 188707 INFO nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.380 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.520 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.521 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.522 188707 INFO nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Creating image(s)
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.522 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.523 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.524 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.540 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.573 188707 DEBUG nova.policy [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5cceae9386a64ff6b1ff736d2a86285f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a652d479d5204330b31c0f67ffd65a20', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.594 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.595 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "c13b49024b5494b3a1c7152ba68db7875bd84683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.596 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.613 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.667 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.669 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.716 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683,backing_fmt=raw /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.717 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "c13b49024b5494b3a1c7152ba68db7875bd84683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.718 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.770 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.772 188707 DEBUG nova.virt.disk.api [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Checking if we can resize image /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.773 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.827 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.828 188707 DEBUG nova.virt.disk.api [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Cannot resize image /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
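[editor's note] "Cannot resize image ... to a smaller size" is the expected outcome of Nova's grow-only check: the overlay's current virtual size is compared against the flavor's root disk, and the image is only ever extended. A sketch of that comparison, assuming qemu-img on PATH:

    import json
    import subprocess

    def can_resize_image(path, requested_bytes):
        # Mirrors the qemu-img info probe in the log; shrinking is refused.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)["virtual-size"] < requested_bytes  # grow only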
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.829 188707 DEBUG nova.objects.instance [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b3df01c-0b2d-42c4-90f7-59c995377765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.851 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.851 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Ensure instance console log exists: /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.851 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.852 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:47 compute-0 nova_compute[188703]: 2026-02-24 16:10:47.852 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:48 compute-0 nova_compute[188703]: 2026-02-24 16:10:48.557 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:48 compute-0 nova_compute[188703]: 2026-02-24 16:10:48.883 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Successfully created port: 308a4c02-df27-44f7-8630-7adbfdb9e316 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:10:49 compute-0 podman[254256]: 2026-02-24 16:10:49.111296823 +0000 UTC m=+0.068313610 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:10:49 compute-0 podman[254257]: 2026-02-24 16:10:49.132443097 +0000 UTC m=+0.089702981 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
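[editor's note] The two podman records are timer-driven container health checks; each container's config_data above carries the 'healthcheck' test that gets executed. The same probe can be triggered by hand, e.g. for node_exporter (assuming podman is available to the invoking user):

    import subprocess

    # On-demand run of the container's configured healthcheck; exit code 0
    # corresponds to the health_status=healthy seen in the log records above.
    subprocess.run(["podman", "healthcheck", "run", "node_exporter"], check=True)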
Feb 24 16:10:49 compute-0 nova_compute[188703]: 2026-02-24 16:10:49.221 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Successfully created port: 540b66b9-f088-4e4e-bc6a-2c20cee24320 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:10:49 compute-0 sshd-session[254299]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.089 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Successfully updated port: 308a4c02-df27-44f7-8630-7adbfdb9e316 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.114 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.115 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquired lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.116 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.204 188707 DEBUG nova.compute.manager [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-changed-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.205 188707 DEBUG nova.compute.manager [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Refreshing instance network info cache due to event network-changed-308a4c02-df27-44f7-8630-7adbfdb9e316. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.205 188707 DEBUG oslo_concurrency.lockutils [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.317 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.361 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Successfully updated port: 540b66b9-f088-4e4e-bc6a-2c20cee24320 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.379 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.379 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquired lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.380 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:10:50 compute-0 nova_compute[188703]: 2026-02-24 16:10:50.605 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.012 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.720 188707 DEBUG nova.network.neutron [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updating instance_info_cache with network_info: [{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.741 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Releasing lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.742 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Instance network_info: |[{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
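[editor's note] The network_info blob logged between the |...| markers above is plain JSON; extracting the bound MAC and fixed IPs is straightforward. A sketch against a trimmed copy of that payload:

    import json

    # Trimmed copy of the network_info payload from the record above.
    network_info = json.loads('''[{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316",
      "address": "fa:16:3e:fb:bb:a8",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.7", "type": "fixed"}]}]}}]''')

    vif = network_info[0]
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["address"], fixed_ips)  # fa:16:3e:fb:bb:a8 ['10.100.0.7']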
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.743 188707 DEBUG oslo_concurrency.lockutils [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.743 188707 DEBUG nova.network.neutron [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Refreshing network info cache for port 308a4c02-df27-44f7-8630-7adbfdb9e316 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.746 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Start _get_guest_xml network_info=[{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.753 188707 WARNING nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.760 188707 DEBUG nova.virt.libvirt.host [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.761 188707 DEBUG nova.virt.libvirt.host [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.766 188707 DEBUG nova.virt.libvirt.host [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.767 188707 DEBUG nova.virt.libvirt.host [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
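[editor's note] The two-step search above (cgroups v1, then v2) boils down to asking whether a cpu controller is available anywhere; on this host v1 lacks one and the v2 unified hierarchy provides it. A minimal equivalent probe of the v2 side:

    from pathlib import Path

    # cgroups v2 lists the enabled controllers in a single file.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
    print("CPU controller found." if has_cpu else "CPU controller missing.")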
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.768 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.769 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.769 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.770 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.770 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.771 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.771 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.777 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.778 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.778 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.779 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.779 188707 DEBUG nova.virt.hardware [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
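[editor's note] For a 1-vCPU flavor with no flavor or image topology constraints, the only factorisation of sockets x cores x threads is 1:1:1, which is why exactly one candidate survives above and reappears verbatim in the generated XML below. The enumeration is easy to reproduce (a sketch of the idea, not Nova's code):

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # All (sockets, cores, threads) triples whose product equals vcpus.
        topos = []
        for s in range(1, min(vcpus, max_s) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_c) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_t:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)]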
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.784 188707 DEBUG nova.virt.libvirt.vif [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-364477853',display_name='tempest-TestNetworkBasicOps-server-364477853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-364477853',id=13,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLDhIGZ45cerZt16vXCE/2M+e0EvezYGSOKoMg7r1jolvwLCe7vqABOiTN3bC7vFpcsuQPOPQnsd5lAZ7mCFOwj9tIZ1SaXBoBDHrHg3VzhKyfqO/D4flGPVobB7p10FqQ==',key_name='tempest-TestNetworkBasicOps-1427864370',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-umhleyp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:46Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=511a6d08-b421-4fcb-bb1a-13d6ee450a2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.785 188707 DEBUG nova.network.os_vif_util [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.786 188707 DEBUG nova.network.os_vif_util [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.787 188707 DEBUG nova.objects.instance [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 511a6d08-b421-4fcb-bb1a-13d6ee450a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.816 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <uuid>511a6d08-b421-4fcb-bb1a-13d6ee450a2d</uuid>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <name>instance-0000000d</name>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:name>tempest-TestNetworkBasicOps-server-364477853</nova:name>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:10:51</nova:creationTime>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:user uuid="7cec00195bca4d15bbb0449e21faedcf">tempest-TestNetworkBasicOps-2112956786-project-member</nova:user>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:project uuid="6d42735c7eb84888b6c3dca096466e04">tempest-TestNetworkBasicOps-2112956786</nova:project>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         <nova:port uuid="308a4c02-df27-44f7-8630-7adbfdb9e316">
Feb 24 16:10:51 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <system>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="serial">511a6d08-b421-4fcb-bb1a-13d6ee450a2d</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="uuid">511a6d08-b421-4fcb-bb1a-13d6ee450a2d</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </system>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <os>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </os>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <features>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </features>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.config"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:fb:bb:a8"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <target dev="tap308a4c02-df"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/console.log" append="off"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <video>
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </video>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:10:51 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:10:51 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:10:51 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:10:51 compute-0 nova_compute[188703]: </domain>
Feb 24 16:10:51 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.825 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Preparing to wait for external event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.825 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.826 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.826 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
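The three oslo_concurrency lines above show the per-instance event lock being taken and released around _create_or_get_event before the compute manager registers its wait for the network-vif-plugged event. The same primitive is available directly from oslo.concurrency; a hedged sketch of the pattern (the lock name copies the "<uuid>-events" convention from the log, and the function body is illustrative only):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events')
    def _create_or_get_event():
        # Runs under the named in-process lock; entering and leaving this
        # function is what produces the Acquiring/acquired/released DEBUG
        # lines seen above.
        pass

    _create_or_get_event()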
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.827 188707 DEBUG nova.virt.libvirt.vif [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-364477853',display_name='tempest-TestNetworkBasicOps-server-364477853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-364477853',id=13,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLDhIGZ45cerZt16vXCE/2M+e0EvezYGSOKoMg7r1jolvwLCe7vqABOiTN3bC7vFpcsuQPOPQnsd5lAZ7mCFOwj9tIZ1SaXBoBDHrHg3VzhKyfqO/D4flGPVobB7p10FqQ==',key_name='tempest-TestNetworkBasicOps-1427864370',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-umhleyp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:46Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=511a6d08-b421-4fcb-bb1a-13d6ee450a2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.828 188707 DEBUG nova.network.os_vif_util [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.829 188707 DEBUG nova.network.os_vif_util [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.829 188707 DEBUG os_vif [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.830 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.831 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.831 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.835 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.835 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap308a4c02-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.836 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap308a4c02-df, col_values=(('external_ids', {'iface-id': '308a4c02-df27-44f7-8630-7adbfdb9e316', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:bb:a8', 'vm-uuid': '511a6d08-b421-4fcb-bb1a-13d6ee450a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:51 compute-0 NetworkManager[56995]: <info>  [1771949451.8391] manager: (tap308a4c02-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.841 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.851 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.852 188707 INFO os_vif [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df')
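The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are how os-vif wires the tap device into br-int; the external_ids written on the Interface row, in particular iface-id set to the Neutron port UUID, are what ovn-controller later matches to bind the logical port. For troubleshooting, the same three steps can be expressed with ovs-vsctl; a sketch via subprocess with the values copied from the log (shown only to illustrate what the transaction does, not something to run against a healthy node):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    run('ovs-vsctl', '--may-exist', 'add-br', 'br-int',
        '--', 'set', 'Bridge', 'br-int', 'datapath_type=system')
    # AddPortCommand(bridge=br-int, port=tap308a4c02-df, may_exist=True)
    run('ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap308a4c02-df')
    # DbSetCommand(table=Interface, record=tap308a4c02-df, col_values=external_ids)
    run('ovs-vsctl', 'set', 'Interface', 'tap308a4c02-df',
        'external_ids:iface-id=308a4c02-df27-44f7-8630-7adbfdb9e316',
        'external_ids:iface-status=active',
        'external_ids:attached-mac="fa:16:3e:fb:bb:a8"',
        'external_ids:vm-uuid=511a6d08-b421-4fcb-bb1a-13d6ee450a2d')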
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.909 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.910 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.911 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] No VIF found with MAC fa:16:3e:fb:bb:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.911 188707 INFO nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Using config drive
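"Using config drive" is why the domain XML above carries a second <disk device="cdrom"> entry: disk.config is attached as a SATA CD-ROM (sda) that carries the instance metadata. Inside the guest the drive uses the standard OpenStack config-drive layout (volume label config-2); a sketch of reading it, assuming the drive has been mounted at /mnt/config (the mount point is an assumption):

    import json

    # Standard config-drive path; 'latest' points at the newest metadata version.
    with open('/mnt/config/openstack/latest/meta_data.json') as f:
        meta = json.load(f)

    print(meta.get('uuid'), meta.get('name'))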
Feb 24 16:10:51 compute-0 ovn_controller[98701]: 2026-02-24T16:10:51Z|00133|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:10:51 compute-0 ovn_controller[98701]: 2026-02-24T16:10:51Z|00134|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:10:51 compute-0 ovn_controller[98701]: 2026-02-24T16:10:51Z|00135|binding|INFO|Releasing lport e6d03cb3-ba09-4724-83d3-edb05289054b from this chassis (sb_readonly=0)
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.956 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:51 compute-0 nova_compute[188703]: 2026-02-24 16:10:51.960 188707 DEBUG nova.network.neutron [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updating instance_info_cache with network_info: [{"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.000 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Releasing lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.001 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Instance network_info: |[{"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
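The network_info dump above is plain JSON once the surrounding |[...]| delimiters are stripped, and it contains everything the interface XML is later built from: the MAC, the tap device name, the MTU, and the fixed IP. A minimal parsing sketch over a hand-copied subset of the single VIF entry logged above:

    # Subset of the one VIF entry from the network_info list above.
    vif = {
        "id": "540b66b9-f088-4e4e-bc6a-2c20cee24320",
        "address": "fa:16:3e:10:46:22",
        "devname": "tap540b66b9-f0",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.12", "type": "fixed"}]}],
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    # -> fa:16:3e:10:46:22 tap540b66b9-f0 1442 ['10.100.0.12']
    print(vif["address"], vif["devname"], vif["network"]["meta"]["mtu"], fixed_ips)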
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.005 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Start _get_guest_xml network_info=[{"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.024 188707 WARNING nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.033 188707 DEBUG nova.virt.libvirt.host [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.034 188707 DEBUG nova.virt.libvirt.host [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.039 188707 DEBUG nova.virt.libvirt.host [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.040 188707 DEBUG nova.virt.libvirt.host [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
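The pair of probes above first looks for a cgroups v1 CPU controller (missing on this host) and then for the cgroups v2 one (found), the expected result on an EL9 host running the unified hierarchy. The v2 check boils down to whether "cpu" appears in the root controller list; a sketch, assuming cgroup2 is mounted at the standard /sys/fs/cgroup:

    # cgroup v2 exposes the available controllers as one space-separated line.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()

    print('CPU controller found' if 'cpu' in controllers
          else 'CPU controller missing')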
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.041 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.041 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:07:14Z,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4407f5b870e145d8917119ad928717e8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:07:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.042 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.042 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.043 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.043 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.044 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.044 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.044 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.045 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.045 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.046 188707 DEBUG nova.virt.hardware [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
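The topology walk above starts from no constraints (flavor and image limits/preferences all 0:0:0, caps of 65536) and, for a single vCPU, can only produce sockets=1, cores=1, threads=1. Conceptually the search enumerates the ordered factorisations of the vCPU count into sockets x cores x threads; a simplified sketch of that idea (an illustration, not the nova.virt.hardware implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log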
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.049 188707 DEBUG nova.virt.libvirt.vif [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1047622473',display_name='tempest-TestServerBasicOps-server-1047622473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1047622473',id=14,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/x/xq3/i+dOKRq5Q/ez7atKEg1OiLx+Yu5sv7g3D5S0LGXdnA7hoDIuXmRoobrxZzJJNC/MIVEamcvk9li//tVyxf3Y1I4DrDrARu9O2/vUb0QGAUiNV4FNYG24pebJA==',key_name='tempest-TestServerBasicOps-1767552992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a652d479d5204330b31c0f67ffd65a20',ramdisk_id='',reservation_id='r-5t78tkhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-598118548',owner_user_name='tempest-TestServerBasicOps-598118548-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5cceae9386a64ff6b1ff736d2a86285f',uuid=4b3df01c-0b2d-42c4-90f7-59c995377765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.050 188707 DEBUG nova.network.os_vif_util [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converting VIF {"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.051 188707 DEBUG nova.network.os_vif_util [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.051 188707 DEBUG nova.objects.instance [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b3df01c-0b2d-42c4-90f7-59c995377765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.069 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <uuid>4b3df01c-0b2d-42c4-90f7-59c995377765</uuid>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <name>instance-0000000e</name>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:name>tempest-TestServerBasicOps-server-1047622473</nova:name>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:10:52</nova:creationTime>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:user uuid="5cceae9386a64ff6b1ff736d2a86285f">tempest-TestServerBasicOps-598118548-project-member</nova:user>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:project uuid="a652d479d5204330b31c0f67ffd65a20">tempest-TestServerBasicOps-598118548</nova:project>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         <nova:port uuid="540b66b9-f088-4e4e-bc6a-2c20cee24320">
Feb 24 16:10:52 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <system>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="serial">4b3df01c-0b2d-42c4-90f7-59c995377765</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="uuid">4b3df01c-0b2d-42c4-90f7-59c995377765</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </system>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <os>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </os>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <features>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </features>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.config"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:10:46:22"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <target dev="tap540b66b9-f0"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/console.log" append="off"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <video>
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </video>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:10:52 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:10:52 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:10:52 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:10:52 compute-0 nova_compute[188703]: </domain>
Feb 24 16:10:52 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.078 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Preparing to wait for external event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.078 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.078 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.078 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
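As with the first instance, the compute manager registers its interest in network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 before plugging the VIF, so the later Neutron notification cannot be lost to a race. Reduced to its shape, this is a prepare-then-wait rendezvous keyed by (instance, event name); a stand-in sketch using threading.Event (Nova's real implementation is its own eventlet-based event machinery, and it bounds the wait with vif_plugging_timeout, 300 seconds by default):

    import threading

    _events = {}                       # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, event_name):
        # Register BEFORE triggering the action, or the callback may race us.
        with _lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def external_instance_event(instance_uuid, event_name):
        # Invoked when the network-vif-plugged notification arrives.
        with _lock:
            ev = _events.get((instance_uuid, event_name))
        if ev is not None:
            ev.set()

    ev = prepare_for_instance_event('instance-uuid', 'network-vif-plugged-port')
    external_instance_event('instance-uuid', 'network-vif-plugged-port')
    ev.wait(timeout=300)               # nova: CONF.vif_plugging_timeout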
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.079 188707 DEBUG nova.virt.libvirt.vif [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1047622473',display_name='tempest-TestServerBasicOps-server-1047622473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1047622473',id=14,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/x/xq3/i+dOKRq5Q/ez7atKEg1OiLx+Yu5sv7g3D5S0LGXdnA7hoDIuXmRoobrxZzJJNC/MIVEamcvk9li//tVyxf3Y1I4DrDrARu9O2/vUb0QGAUiNV4FNYG24pebJA==',key_name='tempest-TestServerBasicOps-1767552992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a652d479d5204330b31c0f67ffd65a20',ramdisk_id='',reservation_id='r-5t78tkhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-598118548',owner_user_name='tempest-TestServerBasicOps-598118548-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5cceae9386a64ff6b1ff736d2a86285f',uuid=4b3df01c-0b2d-42c4-90f7-59c995377765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.079 188707 DEBUG nova.network.os_vif_util [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converting VIF {"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.080 188707 DEBUG nova.network.os_vif_util [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
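
The two DEBUG lines above show the same Neutron port in its two shapes: the JSON network-info blob nova carries, and the typed VIFOpenVSwitch object that os-vif consumes. A minimal sketch of that field mapping, using a plain dataclass instead of the real os_vif object model (class and function names here are illustrative, not nova's actual converter; the fields follow the "Converted object" line above):

    from dataclasses import dataclass

    @dataclass
    class OsVifPort:
        id: str
        address: str
        bridge_name: str
        has_traffic_filtering: bool
        vif_name: str
        active: bool

    def vif_dict_to_osvif(vif: dict) -> OsVifPort:
        # 'vif' is the JSON blob logged by nova.network.os_vif_util above.
        details = vif.get("details", {})
        return OsVifPort(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            has_traffic_filtering=bool(details.get("port_filter")),  # port_filter: true
            vif_name=vif["devname"],                                 # e.g. tap540b66b9-f0
            active=bool(vif["active"]),
        )
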
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.080 188707 DEBUG os_vif [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.080 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.081 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.081 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.085 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.085 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540b66b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.086 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap540b66b9-f0, col_values=(('external_ids', {'iface-id': '540b66b9-f088-4e4e-bc6a-2c20cee24320', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:46:22', 'vm-uuid': '4b3df01c-0b2d-42c4-90f7-59c995377765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
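
The three IDL commands above (AddBridgeCommand, AddPortCommand, DbSetCommand on Interface.external_ids) are what binds the tap device to the Neutron port: ovn-controller matches external_ids:iface-id against the logical port UUID and then claims it, as the binding INFO lines further down show. nova drives this through ovsdbapp's native OVSDB connection; a rough shell-out equivalent with ovs-vsctl, shown only to make the transaction concrete (the helper name is ours; requires root and a local Open vSwitch, values taken from the lines above):

    import subprocess

    def plug_ovs_port(bridge: str, port: str, iface_id: str, mac: str, vm_uuid: str) -> None:
        # --may-exist mirrors may_exist=True in the Add*Command calls above.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
             "set", "Interface", port,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True,
        )

    plug_ovs_port("br-int", "tap540b66b9-f0",
                  "540b66b9-f088-4e4e-bc6a-2c20cee24320",
                  "fa:16:3e:10:46:22",
                  "4b3df01c-0b2d-42c4-90f7-59c995377765")
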
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.0889] manager: (tap540b66b9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.089 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.097 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.098 188707 INFO os_vif [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0')
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.144 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.145 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.145 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] No VIF found with MAC fa:16:3e:10:46:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.146 188707 INFO nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Using config drive
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.308 188707 DEBUG nova.compute.manager [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-changed-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.309 188707 DEBUG nova.compute.manager [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Refreshing instance network info cache due to event network-changed-540b66b9-f088-4e4e-bc6a-2c20cee24320. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.310 188707 DEBUG oslo_concurrency.lockutils [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.310 188707 DEBUG oslo_concurrency.lockutils [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.310 188707 DEBUG nova.network.neutron [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Refreshing network info cache for port 540b66b9-f088-4e4e-bc6a-2c20cee24320 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
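
The Acquiring/Acquired pair above is a named lock, "refresh_cache-<instance uuid>", so concurrent network-changed events for the same instance refresh the info cache one at a time. A toy in-process version of that named-lock pattern (oslo.concurrency's lockutils provides the real one, including optional file-based external locks; names below are illustrative):

    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)  # one lock per name, created on demand

    def synchronized(name: str):
        def wrap(fn):
            def inner(*args, **kwargs):
                with _locks[name]:        # serializes callers sharing this name
                    return fn(*args, **kwargs)
            return inner
        return wrap

    @synchronized("refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765")
    def refresh_network_info_cache():
        ...  # re-query Neutron for the port and store the result
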
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.419 188707 INFO nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Creating config drive at /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.config
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.426 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprektqnaz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.552 188707 DEBUG oslo_concurrency.processutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprektqnaz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.585 188707 INFO nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Creating config drive at /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.config
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.591 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnvd2c8sv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:10:52 compute-0 kernel: tap308a4c02-df: entered promiscuous mode
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.6216] manager: (tap308a4c02-df): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.626 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00136|binding|INFO|Claiming lport 308a4c02-df27-44f7-8630-7adbfdb9e316 for this chassis.
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00137|binding|INFO|308a4c02-df27-44f7-8630-7adbfdb9e316: Claiming fa:16:3e:fb:bb:a8 10.100.0.7
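
The "Claiming fa:16:3e:fb:bb:a8 10.100.0.7" form comes from the southbound Port_Binding table, whose mac column packs the MAC and its IPs into one space-separated string (visible again in the mac=[...] dump below). Splitting it apart is one call; this helper is illustrative, not OVN code:

    def parse_binding_mac(entry: str) -> tuple[str, list[str]]:
        mac, *ips = entry.split()
        return mac, ips

    assert parse_binding_mac("fa:16:3e:fb:bb:a8 10.100.0.7") == \
        ("fa:16:3e:fb:bb:a8", ["10.100.0.7"])
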
Feb 24 16:10:52 compute-0 systemd-udevd[254325]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.637 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:bb:a8 10.100.0.7'], port_security=['fa:16:3e:fb:bb:a8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '511a6d08-b421-4fcb-bb1a-13d6ee450a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d42735c7eb84888b6c3dca096466e04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd295f37d-3220-4591-855f-7d991af78faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2db2ff8a-782e-4e32-b2de-a44ea0ff97e9, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=308a4c02-df27-44f7-8630-7adbfdb9e316) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.638 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 308a4c02-df27-44f7-8630-7adbfdb9e316 in datapath aeadce2d-53c4-4727-bbc6-e1191df0ffea bound to our chassis
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.640 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aeadce2d-53c4-4727-bbc6-e1191df0ffea
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.6464] device (tap308a4c02-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.6517] device (tap308a4c02-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.655 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00138|binding|INFO|Setting lport 308a4c02-df27-44f7-8630-7adbfdb9e316 up in Southbound
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00139|binding|INFO|Setting lport 308a4c02-df27-44f7-8630-7adbfdb9e316 ovn-installed in OVS
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.658 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.660 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 systemd-machined[158049]: New machine qemu-13-instance-0000000d.
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.670 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3f61e2fe-03c3-44ae-9352-5051661dbcd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.709 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[b87c523e-72d4-4604-a3d5-ac5562934890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.713 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee81d49-1d89-4851-a1c4-e7888a086376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.723 188707 DEBUG oslo_concurrency.processutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnvd2c8sv" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
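
Both instances get their "config-2" drive the same way: nova writes the metadata tree into a temp directory and shells out to mkisofs with the flags shown in the CMD lines above (Joliet and Rock Ridge extensions, volume label config-2, lowercase and multi-dot names allowed). A standalone reproduction of that call, assuming mkisofs (or its genisoimage alias) is installed; the helper name is ours:

    import subprocess

    def make_config_drive(iso_path: str, srcdir: str, publisher: str) -> None:
        # Flags copied verbatim from the oslo_concurrency.processutils lines above.
        subprocess.run(
            ["mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher,
             "-quiet", "-J", "-r", "-V", "config-2",
             srcdir],
            check=True,
        )
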
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.743 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[fb25cf55-4edd-42da-9e96-a68df3cd2f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.763 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3fc08-aa6a-411c-a965-6c4f85635989]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaeadce2d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:98:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254346, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.778 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc7bb7b-4a9a-4ed0-8d99-d4d5149e021c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaeadce2d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509938, 'tstamp': 509938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254352, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaeadce2d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509940, 'tstamp': 509940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254352, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.780 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeadce2d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.7891] manager: (tap540b66b9-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Feb 24 16:10:52 compute-0 kernel: tap540b66b9-f0: entered promiscuous mode
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.8061] device (tap540b66b9-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.806 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaeadce2d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.806 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.807 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaeadce2d-50, col_values=(('external_ids', {'iface-id': 'e6d03cb3-ba09-4724-83d3-edb05289054b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.807 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.805 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00140|binding|INFO|Claiming lport 540b66b9-f088-4e4e-bc6a-2c20cee24320 for this chassis.
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.8111] device (tap540b66b9-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00141|binding|INFO|540b66b9-f088-4e4e-bc6a-2c20cee24320: Claiming fa:16:3e:10:46:22 10.100.0.12
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.819 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:22 10.100.0.12'], port_security=['fa:16:3e:10:46:22 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b3df01c-0b2d-42c4-90f7-59c995377765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a652d479d5204330b31c0f67ffd65a20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44eef4cc-551f-41af-97e7-bdef67551494 c338f274-5f9a-4760-9c0e-39ecdb6b2b31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb4ba59-322c-4717-9949-bb4517b92bfb, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=540b66b9-f088-4e4e-bc6a-2c20cee24320) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.822 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.821 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 540b66b9-f088-4e4e-bc6a-2c20cee24320 in datapath 1a08fe3b-f918-47ba-ae63-ee103c7afcba bound to our chassis
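
"Bound to our chassis" is the whole trigger for metadata provisioning: the PortBindingUpdatedEvent above matched because the row's chassis column went from empty (old=Port_Binding(chassis=[])) to this host's chassis. The predicate in miniature, with plain dicts standing in for ovsdbapp IDL rows and a hostname standing in for the chassis row reference:

    def bound_to_our_chassis(row: dict, old: dict, our_chassis: str) -> bool:
        # Fires only on the [] -> [our chassis] transition logged above.
        return old.get("chassis") == [] and row.get("chassis") == [our_chassis]

    row = {"logical_port": "540b66b9-f088-4e4e-bc6a-2c20cee24320",
           "chassis": ["compute-0.ctlplane.example.com"]}
    assert bound_to_our_chassis(row, {"chassis": []}, "compute-0.ctlplane.example.com")
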
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00142|binding|INFO|Setting lport 540b66b9-f088-4e4e-bc6a-2c20cee24320 ovn-installed in OVS
Feb 24 16:10:52 compute-0 ovn_controller[98701]: 2026-02-24T16:10:52Z|00143|binding|INFO|Setting lport 540b66b9-f088-4e4e-bc6a-2c20cee24320 up in Southbound
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.827 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a08fe3b-f918-47ba-ae63-ee103c7afcba
Feb 24 16:10:52 compute-0 nova_compute[188703]: 2026-02-24 16:10:52.827 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:52 compute-0 systemd-machined[158049]: New machine qemu-14-instance-0000000e.
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.841 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[a75843b8-c1f6-4d50-933e-ad711a4d29f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.843 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1a08fe3b-f1 in ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
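
Provisioning builds a veth pair per network: one leg stays in the root namespace and gets plugged into br-int (the AddPortCommand for tap1a08fe3b-f0 below), while the peer (tap1a08fe3b-f1) moves into the ovnmeta-<network> namespace and carries the subnet address plus 169.254.169.254, as the earlier RTM_NEWADDR replies show for the other network. A sketch of that wiring with iproute2 via subprocess (helper name is ours; assumes the namespace already exists and requires root):

    import subprocess

    def make_metadata_veth(ns: str, outer: str, inner: str) -> None:
        def run(*cmd: str) -> None:
            subprocess.run(cmd, check=True)
        run("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
        run("ip", "link", "set", inner, "netns", ns)   # inner leg into ovnmeta-<net>
        run("ip", "link", "set", outer, "up")
        run("ip", "-n", ns, "link", "set", inner, "up")
        # the agent then assigns the metadata IP inside the namespace:
        run("ip", "-n", ns, "addr", "add", "169.254.169.254/32", "dev", inner)
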
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.845 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1a08fe3b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.845 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7bb260-5aa9-492d-983f-bb71bd6fc48e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.849 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[53634194-ad9a-435c-bfff-16e58fea0ece]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.870 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9193bc-65fb-4781-9f3d-346641064b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.895 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[685c2508-cbe3-4afa-bd3d-5ec235fdfb0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.919 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[beebb407-a993-4705-85d0-79da6d537672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.9271] manager: (tap1a08fe3b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.926 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[209a6817-990f-4059-8bf7-6f41e9d2d57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.966 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[6b02c889-6950-4088-8755-5df709de93e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.970 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[8120f4e3-a315-4313-8eed-050faa509caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:52 compute-0 NetworkManager[56995]: <info>  [1771949452.9892] device (tap1a08fe3b-f0): carrier: link connected
Feb 24 16:10:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:52.992 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[072b6877-9ddf-43f9-bb07-77d56012b5be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.006 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[65538f32-3e0c-4f57-a3bb-be8f4101da06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a08fe3b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:4b:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516549, 'reachable_time': 38026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254391, 'error': None, 'target': 'ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.020 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ab042803-a64f-40fd-8cc4-b04c2d16e05f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:4b10'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254392, 'error': None, 'target': 'ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.035 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[46a41554-3454-48b0-b7ba-c9b38af6e26b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a08fe3b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:4b:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516549, 'reachable_time': 38026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254393, 'error': None, 'target': 'ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.060 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8f2cfc-7802-434a-92ad-16caf39b53b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.110 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7d49a1-e682-4fc0-8707-2b139a3ed2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.112 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a08fe3b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.113 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.113 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a08fe3b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:53 compute-0 kernel: tap1a08fe3b-f0: entered promiscuous mode
Feb 24 16:10:53 compute-0 NetworkManager[56995]: <info>  [1771949453.1163] manager: (tap1a08fe3b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.115 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.118 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.121 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a08fe3b-f0, col_values=(('external_ids', {'iface-id': 'f6294507-2873-4400-bee9-b9ef626c0371'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.123 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:53 compute-0 ovn_controller[98701]: 2026-02-24T16:10:53Z|00144|binding|INFO|Releasing lport f6294507-2873-4400-bee9-b9ef626c0371 from this chassis (sb_readonly=0)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.129 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.131 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1a08fe3b-f918-47ba-ae63-ee103c7afcba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1a08fe3b-f918-47ba-ae63-ee103c7afcba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
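
The ENOENT above is the expected first-run case: before spawning haproxy the agent probes for a stale pidfile, and neutron's get_value_from_file treats a missing file as "no previous proxy", logging at DEBUG and returning nothing. A minimal equivalent of that defensive read (simplified from neutron's version, not a verbatim copy):

    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(path: str, converter=None):
        # A missing pidfile is normal on first provisioning; log and return None.
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except OSError as e:
            LOG.debug("Unable to access %s; Error: %s", path, e)
            return None
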
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.132 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[66893efb-84af-4a84-81c2-4b92151ffcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.133 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-1a08fe3b-f918-47ba-ae63-ee103c7afcba
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/1a08fe3b-f918-47ba-ae63-ee103c7afcba.pid.haproxy
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 1a08fe3b-f918-47ba-ae63-ee103c7afcba
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:10:53 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:53.133 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'env', 'PROCESS_TAG=haproxy-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1a08fe3b-f918-47ba-ae63-ee103c7afcba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
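
Everything variable in the haproxy_cfg dump above derives from the network UUID: the log-tag, the pidfile path, and the X-OVN-Network-ID header that the metadata service uses to resolve which network a 169.254.169.254 request came from; the backend is the agent's unix socket at /var/lib/neutron/metadata_proxy. A sketch of that render step with string.Template (the real template lives in neutron.agent.ovn.metadata.driver; the defaults section is trimmed here for brevity):

    from string import Template

    HAPROXY_TMPL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_haproxy_cfg(network_id: str) -> str:
        return HAPROXY_TMPL.substitute(
            network_id=network_id,
            pidfile=f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy",
            socket_path="/var/lib/neutron/metadata_proxy",
        )

    print(render_haproxy_cfg("1a08fe3b-f918-47ba-ae63-ee103c7afcba"))
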
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.250 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949453.2498603, 511a6d08-b421-4fcb-bb1a-13d6ee450a2d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.250 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] VM Started (Lifecycle Event)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.321 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.332 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949453.2499588, 511a6d08-b421-4fcb-bb1a-13d6ee450a2d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.332 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] VM Paused (Lifecycle Event)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.360 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.369 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.388 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] During sync_power_state the instance has a pending task (spawning). Skip.
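
The Paused-then-Skip pair above is normal during spawn: the domain briefly reports PAUSED (VM power_state 3 in nova's map) while the DB row still says vm_state building with task_state spawning (DB power_state 0, NOSTATE), and nova only reconciles power state when no task is in flight. The decision in miniature (function name is ours, mirroring the logged logic, not nova's actual signature):

    def should_sync_power_state(task_state, db_power_state: int, vm_power_state: int) -> bool:
        # Mirrors the handle_lifecycle_event lines above: a pending task
        # (here task_state='spawning') means the mismatch is expected; skip.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    assert should_sync_power_state("spawning", 0, 3) is False
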
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.480 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949453.4795384, 4b3df01c-0b2d-42c4-90f7-59c995377765 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.480 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] VM Started (Lifecycle Event)
Feb 24 16:10:53 compute-0 podman[254437]: 2026-02-24 16:10:53.529399847 +0000 UTC m=+0.101681712 container create 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.560 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:53 compute-0 podman[254437]: 2026-02-24 16:10:53.481199305 +0000 UTC m=+0.053481180 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:10:53 compute-0 systemd[1]: Started libpod-conmon-904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3.scope.
Feb 24 16:10:53 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cb1c0f16a8755fbe168ae65a81ae403a7958d7d3d5d1212caacb194b08f57e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:10:53 compute-0 podman[254437]: 2026-02-24 16:10:53.645265341 +0000 UTC m=+0.217547216 container init 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:10:53 compute-0 podman[254452]: 2026-02-24 16:10:53.645260441 +0000 UTC m=+0.078018469 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:10:53 compute-0 podman[254437]: 2026-02-24 16:10:53.65536195 +0000 UTC m=+0.227643795 container start 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:10:53 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [NOTICE]   (254494) : New worker (254496) forked
Feb 24 16:10:53 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [NOTICE]   (254494) : Loading success.
Feb 24 16:10:53 compute-0 podman[254451]: 2026-02-24 16:10:53.677987155 +0000 UTC m=+0.113462818 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.738 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.744 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949453.4798331, 4b3df01c-0b2d-42c4-90f7-59c995377765 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.744 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] VM Paused (Lifecycle Event)
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.763 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.768 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.790 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.826 188707 DEBUG nova.network.neutron [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updated VIF entry in instance network info cache for port 308a4c02-df27-44f7-8630-7adbfdb9e316. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.827 188707 DEBUG nova.network.neutron [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updating instance_info_cache with network_info: [{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:53 compute-0 nova_compute[188703]: 2026-02-24 16:10:53.849 188707 DEBUG oslo_concurrency.lockutils [req-f278cd84-84c0-4d65-bc7b-82eb14535e08 req-159d008b-a40e-48c9-8765-3095f6faab44 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.045 188707 DEBUG nova.compute.manager [req-f06daa56-74cc-4961-ba02-7cfbac8efc5b req-b5d5218a-e2e3-4dfc-b29f-518e0c477f82 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.046 188707 DEBUG oslo_concurrency.lockutils [req-f06daa56-74cc-4961-ba02-7cfbac8efc5b req-b5d5218a-e2e3-4dfc-b29f-518e0c477f82 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.046 188707 DEBUG oslo_concurrency.lockutils [req-f06daa56-74cc-4961-ba02-7cfbac8efc5b req-b5d5218a-e2e3-4dfc-b29f-518e0c477f82 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.046 188707 DEBUG oslo_concurrency.lockutils [req-f06daa56-74cc-4961-ba02-7cfbac8efc5b req-b5d5218a-e2e3-4dfc-b29f-518e0c477f82 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.046 188707 DEBUG nova.compute.manager [req-f06daa56-74cc-4961-ba02-7cfbac8efc5b req-b5d5218a-e2e3-4dfc-b29f-518e0c477f82 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Processing event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.047 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.054 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949455.0538535, 4b3df01c-0b2d-42c4-90f7-59c995377765 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.054 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] VM Resumed (Lifecycle Event)
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.056 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.066 188707 INFO nova.virt.libvirt.driver [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Instance spawned successfully.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.066 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.081 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.092 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.098 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.099 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.099 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.100 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.100 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.101 188707 DEBUG nova.virt.libvirt.driver [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.109 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.171 188707 INFO nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Took 7.65 seconds to spawn the instance on the hypervisor.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.172 188707 DEBUG nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.237 188707 INFO nova.compute.manager [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Took 8.38 seconds to build instance.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.253 188707 DEBUG oslo_concurrency.lockutils [None req-3d962ebe-da33-437a-b3bf-1659296da39a 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:55.738 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:55.739 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:55.739 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.957 188707 DEBUG nova.compute.manager [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.958 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.959 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.960 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.960 188707 DEBUG nova.compute.manager [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Processing event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.961 188707 DEBUG nova.compute.manager [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.962 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.963 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.963 188707 DEBUG oslo_concurrency.lockutils [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.964 188707 DEBUG nova.compute.manager [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] No waiting events found dispatching network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.965 188707 WARNING nova.compute.manager [req-1768042d-e748-4305-9fff-8d92bdf4c1a3 req-e1dde094-2cd0-4a82-b4fc-eeaec31bf437 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received unexpected event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 for instance with vm_state building and task_state spawning.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.970 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.976 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949455.97578, 511a6d08-b421-4fcb-bb1a-13d6ee450a2d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.977 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] VM Resumed (Lifecycle Event)
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.981 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949440.980526, 9782c01d-ae8a-45f3-8949-f89d691eba6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.981 188707 INFO nova.compute.manager [-] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] VM Stopped (Lifecycle Event)
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.982 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.992 188707 INFO nova.virt.libvirt.driver [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Instance spawned successfully.
Feb 24 16:10:55 compute-0 nova_compute[188703]: 2026-02-24 16:10:55.992 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.015 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.017 188707 DEBUG nova.compute.manager [None req-63144eba-f89c-4992-bd2c-f76986311f33 - - - - - -] [instance: 9782c01d-ae8a-45f3-8949-f89d691eba6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.023 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.028 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.029 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.029 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.029 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.030 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.030 188707 DEBUG nova.virt.libvirt.driver [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.057 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.099 188707 INFO nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Took 9.24 seconds to spawn the instance on the hypervisor.
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.099 188707 DEBUG nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.163 188707 INFO nova.compute.manager [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Took 9.79 seconds to build instance.
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.182 188707 DEBUG oslo_concurrency.lockutils [None req-8272ebe7-9722-4918-beb9-80ae62b7db4b 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.838 188707 DEBUG nova.network.neutron [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updated VIF entry in instance network info cache for port 540b66b9-f088-4e4e-bc6a-2c20cee24320. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.839 188707 DEBUG nova.network.neutron [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updating instance_info_cache with network_info: [{"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.858 188707 DEBUG oslo_concurrency.lockutils [req-33113cd3-58a8-4e67-8520-b3b3c815352d req-43b80b6f-47d9-4976-9f77-6a5e1c0027f3 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.997 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.998 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:56 compute-0 nova_compute[188703]: 2026-02-24 16:10:56.998 188707 INFO nova.compute.manager [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Rebooting instance
Feb 24 16:10:57 compute-0 nova_compute[188703]: 2026-02-24 16:10:57.018 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:10:57 compute-0 nova_compute[188703]: 2026-02-24 16:10:57.018 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquired lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:10:57 compute-0 nova_compute[188703]: 2026-02-24 16:10:57.018 188707 DEBUG nova.network.neutron [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:10:57 compute-0 nova_compute[188703]: 2026-02-24 16:10:57.088 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.314 188707 DEBUG nova.compute.manager [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.314 188707 DEBUG oslo_concurrency.lockutils [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.315 188707 DEBUG oslo_concurrency.lockutils [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.315 188707 DEBUG oslo_concurrency.lockutils [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.316 188707 DEBUG nova.compute.manager [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] No waiting events found dispatching network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.316 188707 WARNING nova.compute.manager [req-a2be20d6-f48b-4a6b-9f5b-7da0a6ca2d74 req-f7a42675-a6d3-4122-bfe4-fc7ef44119d7 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received unexpected event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 for instance with vm_state active and task_state None.
Feb 24 16:10:58 compute-0 nova_compute[188703]: 2026-02-24 16:10:58.563 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 podman[254506]: 2026-02-24 16:10:59.104891259 +0000 UTC m=+0.062343124 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.521 188707 DEBUG nova.network.neutron [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.541 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Releasing lock "refresh_cache-e365caeb-efd7-437b-aa10-e579f7c99f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.542 188707 DEBUG nova.compute.manager [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:10:59 compute-0 kernel: tap1c040558-99 (unregistering): left promiscuous mode
Feb 24 16:10:59 compute-0 NetworkManager[56995]: <info>  [1771949459.7017] device (tap1c040558-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:10:59 compute-0 ovn_controller[98701]: 2026-02-24T16:10:59Z|00145|binding|INFO|Releasing lport 1c040558-99c8-40bd-8b21-1337faca7edc from this chassis (sb_readonly=0)
Feb 24 16:10:59 compute-0 ovn_controller[98701]: 2026-02-24T16:10:59Z|00146|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc down in Southbound
Feb 24 16:10:59 compute-0 ovn_controller[98701]: 2026-02-24T16:10:59Z|00147|binding|INFO|Removing iface tap1c040558-99 ovn-installed in OVS
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.714 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.716 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:59.720 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:69 10.100.0.10'], port_security=['fa:16:3e:30:c9:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e365caeb-efd7-437b-aa10-e579f7c99f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b63b7c206004c42b699bdc42c129b6b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '302c0bad-634d-4905-abc7-a5c548d119ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92707d4c-a464-49d7-8f37-7fa0e55d12a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=1c040558-99c8-40bd-8b21-1337faca7edc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:10:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:59.724 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 1c040558-99c8-40bd-8b21-1337faca7edc in datapath 617264bd-8d71-44c7-9bb9-ef21a37be5eb unbound from our chassis
Feb 24 16:10:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:59.729 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 617264bd-8d71-44c7-9bb9-ef21a37be5eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.729 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:59.731 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[19637c90-82d1-4e2f-bd49-c5e3e63824f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:10:59 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:10:59.731 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb namespace which is not needed anymore
Feb 24 16:10:59 compute-0 podman[204685]: time="2026-02-24T16:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:10:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32936 "" "Go-http-client/1.1"
Feb 24 16:10:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5776 "" "Go-http-client/1.1"
Feb 24 16:10:59 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 24 16:10:59 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 39.849s CPU time.
Feb 24 16:10:59 compute-0 systemd-machined[158049]: Machine qemu-9-instance-00000009 terminated.
Feb 24 16:10:59 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [NOTICE]   (253255) : haproxy version is 2.8.14-c23fe91
Feb 24 16:10:59 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [NOTICE]   (253255) : path to executable is /usr/sbin/haproxy
Feb 24 16:10:59 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [WARNING]  (253255) : Exiting Master process...
Feb 24 16:10:59 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [ALERT]    (253255) : Current worker (253257) exited with code 143 (Terminated)
Feb 24 16:10:59 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[253251]: [WARNING]  (253255) : All workers exited. Exiting... (0)
Feb 24 16:10:59 compute-0 systemd[1]: libpod-ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f.scope: Deactivated successfully.
Feb 24 16:10:59 compute-0 podman[254546]: 2026-02-24 16:10:59.913235347 +0000 UTC m=+0.072070243 container died ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.919 188707 INFO nova.virt.libvirt.driver [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance destroyed successfully.
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.920 188707 DEBUG nova.objects.instance [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'resources' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.937 188707 DEBUG nova.virt.libvirt.vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:10:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.938 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.939 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.939 188707 DEBUG os_vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.941 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.941 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c040558-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.944 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.946 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.948 188707 INFO os_vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99')
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.955 188707 DEBUG nova.virt.libvirt.driver [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start _get_guest_xml network_info=[{"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f-userdata-shm.mount: Deactivated successfully.
Feb 24 16:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-54297fbc26350a977621c9e4a774aea4d962dd6868d7bdb3781fefd5d5ec894e-merged.mount: Deactivated successfully.
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.970 188707 WARNING nova.virt.libvirt.driver [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:10:59 compute-0 podman[254546]: 2026-02-24 16:10:59.974995874 +0000 UTC m=+0.133830750 container cleanup ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.976 188707 DEBUG nova.virt.libvirt.host [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.977 188707 DEBUG nova.virt.libvirt.host [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:10:59 compute-0 systemd[1]: libpod-conmon-ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f.scope: Deactivated successfully.
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.983 188707 DEBUG nova.virt.libvirt.host [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.984 188707 DEBUG nova.virt.libvirt.host [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.985 188707 DEBUG nova.virt.libvirt.driver [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.985 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ee41af80-6a60-4735-8135-3a06de2a36b2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.986 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.987 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.987 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.988 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.989 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.989 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.990 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.990 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.991 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.991 188707 DEBUG nova.virt.hardware [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 24 16:10:59 compute-0 nova_compute[188703]: 2026-02-24 16:10:59.992 188707 DEBUG nova.objects.instance [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'vcpu_model' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:00 compute-0 podman[254588]: 2026-02-24 16:11:00.070948367 +0000 UTC m=+0.064184645 container remove ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.075 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[697e50d9-4bc6-4f66-a7b8-cf4a3ad70f6d]: (4, ('Tue Feb 24 04:10:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb (ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f)\nba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f\nTue Feb 24 04:10:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb (ba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f)\nba0817bdd66e9f083ae32943a84cad0189c197dbc69688288895bf3418f6944f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.077 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2564c947-e97b-4281-a57a-83c6927be875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.077 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617264bd-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.079 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:00 compute-0 kernel: tap617264bd-80: left promiscuous mode
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.084 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.093 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.096 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3633c818-c897-43c7-87d4-ae719779a97f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.112 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b7355bb5-088c-47e3-b2ef-a0e263988c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.113 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2712b123-1d47-4949-adb4-5aa7229e6945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.128 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3c442332-0201-4593-b83a-7aa7da31987d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509383, 'reachable_time': 36816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254601, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.131 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:11:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:00.131 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3bc9b4-428a-4466-9e83-edd3cb46ac6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d617264bd\x2d8d71\x2d44c7\x2d9bb9\x2def21a37be5eb.mount: Deactivated successfully.
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.681 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.763 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.764 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.765 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.766 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.767 188707 DEBUG nova.virt.libvirt.vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:10:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.768 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.769 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.771 188707 DEBUG nova.objects.instance [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'pci_devices' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.789 188707 DEBUG nova.virt.libvirt.driver [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <uuid>e365caeb-efd7-437b-aa10-e579f7c99f2b</uuid>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <name>instance-00000009</name>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:name>tempest-ServerActionsTestJSON-server-1465534534</nova:name>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:10:59</nova:creationTime>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:user uuid="e3fc739339a5496cb9c0e2e0eebefd55">tempest-ServerActionsTestJSON-1577843196-project-member</nova:user>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:project uuid="9b63b7c206004c42b699bdc42c129b6b">tempest-ServerActionsTestJSON-1577843196</nova:project>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="ee41af80-6a60-4735-8135-3a06de2a36b2"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         <nova:port uuid="1c040558-99c8-40bd-8b21-1337faca7edc">
Feb 24 16:11:00 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <system>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="serial">e365caeb-efd7-437b-aa10-e579f7c99f2b</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="uuid">e365caeb-efd7-437b-aa10-e579f7c99f2b</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </system>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <os>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </os>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <features>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </features>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.config"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:30:c9:69"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <target dev="tap1c040558-99"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/console.log" append="off"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <video>
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </video>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <input type="keyboard" bus="usb"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:11:00 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:11:00 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:11:00 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:11:00 compute-0 nova_compute[188703]: </domain>
Feb 24 16:11:00 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.797 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.849 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.850 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.906 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.909 188707 DEBUG nova.objects.instance [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'trusted_certs' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.922 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.978 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.979 188707 DEBUG nova.virt.disk.api [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Checking if we can resize image /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:11:00 compute-0 nova_compute[188703]: 2026-02-24 16:11:00.980 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.061 188707 DEBUG oslo_concurrency.processutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.062 188707 DEBUG nova.virt.disk.api [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Cannot resize image /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.063 188707 DEBUG nova.objects.instance [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'migration_context' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.080 188707 DEBUG nova.virt.libvirt.vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:10:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.081 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.082 188707 DEBUG nova.network.os_vif_util [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.082 188707 DEBUG os_vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.083 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.084 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.084 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.086 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.087 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c040558-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.087 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c040558-99, col_values=(('external_ids', {'iface-id': '1c040558-99c8-40bd-8b21-1337faca7edc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:c9:69', 'vm-uuid': 'e365caeb-efd7-437b-aa10-e579f7c99f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
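[The AddPortCommand/DbSetCommand pair above is one atomic OVSDB transaction: create the port on br-int if missing, then stamp the Interface row with the external_ids that ovn-controller keys on. A sketch of the same two commands through ovsdbapp's OVS schema API; the ovsdb-server socket path is an assumption for a default install.]

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Both commands commit together, matching "txn n=1 command(idx=0/1)".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap1c040558-99', may_exist=True))
        txn.add(api.db_set('Interface', 'tap1c040558-99',
                           ('external_ids', {
                               'iface-id': '1c040558-99c8-40bd-8b21-1337faca7edc',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:30:c9:69',
                               'vm-uuid': 'e365caeb-efd7-437b-aa10-e579f7c99f2b'})))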
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.090 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.0924] manager: (tap1c040558-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.094 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.097 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.098 188707 INFO os_vif [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99')
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.164 188707 DEBUG nova.compute.manager [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-unplugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.165 188707 DEBUG oslo_concurrency.lockutils [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.165 188707 DEBUG oslo_concurrency.lockutils [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.165 188707 DEBUG oslo_concurrency.lockutils [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.166 188707 DEBUG nova.compute.manager [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] No waiting events found dispatching network-vif-unplugged-1c040558-99c8-40bd-8b21-1337faca7edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.166 188707 WARNING nova.compute.manager [req-4a9f4768-766e-441e-aba4-2e918795190f req-64a67fb5-a1fd-46f4-bc8d-2a26e123cc25 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received unexpected event network-vif-unplugged-1c040558-99c8-40bd-8b21-1337faca7edc for instance with vm_state active and task_state reboot_started_hard.
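[The network-vif-unplugged event is "unexpected" because a hard reboot registers no waiter for unplug events; the pop finds nothing queued and nova only warns. The acquire/release pairs bracketing the pop are oslo.concurrency named locks keyed on '<instance-uuid>-events'. The same primitive, as a sketch with the lock name taken from the records above:]

    from oslo_concurrency import lockutils

    # Everything inside the block is serialized per instance, exactly the
    # "-events" acquire/release pattern logged by _pop_event above.
    with lockutils.lock('e365caeb-efd7-437b-aa10-e579f7c99f2b-events'):
        pass  # pop or queue the external event here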
Feb 24 16:11:01 compute-0 kernel: tap1c040558-99: entered promiscuous mode
Feb 24 16:11:01 compute-0 systemd-udevd[254529]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.1743] manager: (tap1c040558-99): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Feb 24 16:11:01 compute-0 ovn_controller[98701]: 2026-02-24T16:11:01Z|00148|binding|INFO|Claiming lport 1c040558-99c8-40bd-8b21-1337faca7edc for this chassis.
Feb 24 16:11:01 compute-0 ovn_controller[98701]: 2026-02-24T16:11:01Z|00149|binding|INFO|1c040558-99c8-40bd-8b21-1337faca7edc: Claiming fa:16:3e:30:c9:69 10.100.0.10
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.179 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.1854] device (tap1c040558-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.1872] device (tap1c040558-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.187 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:69 10.100.0.10'], port_security=['fa:16:3e:30:c9:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e365caeb-efd7-437b-aa10-e579f7c99f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b63b7c206004c42b699bdc42c129b6b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '302c0bad-634d-4905-abc7-a5c548d119ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92707d4c-a464-49d7-8f37-7fa0e55d12a7, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=1c040558-99c8-40bd-8b21-1337faca7edc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
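[The metadata agent sees that Southbound update through ovsdbapp's row-event machinery: the record above shows a Port_Binding row whose chassis column just went from empty to set. A trimmed sketch of an event class like the one matched; neutron's real PortBindingUpdatedEvent carries additional checks (for example, the requested chassis), so treat this as illustrative only.]

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the log: events=('update',), table='Port_Binding'
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # old=Port_Binding(chassis=[]) above: fire only when the chassis
            # column transitions from empty to set, i.e. the port was bound.
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)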
Feb 24 16:11:01 compute-0 ovn_controller[98701]: 2026-02-24T16:11:01Z|00150|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc ovn-installed in OVS
Feb 24 16:11:01 compute-0 ovn_controller[98701]: 2026-02-24T16:11:01Z|00151|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc up in Southbound
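[ovn-controller claims the lport because external_ids:iface-id on the new tap interface equals the Port_Binding's logical_port and requested-chassis names this host. Both sides of that link can be inspected by hand; a sketch, assuming the usual ovs-vsctl and ovn-sbctl control sockets are reachable from this shell.]

    import subprocess

    iface = 'tap1c040558-99'
    lport = subprocess.check_output(
        ['ovs-vsctl', 'get', 'Interface', iface, 'external_ids:iface-id'],
        text=True).strip().strip('"')
    # Southbound side of the claim; the chassis column is filled in once
    # "Claiming lport ..." succeeds, and up=[true] after "Setting lport ... up".
    print(subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding', 'logical_port=%s' % lport],
        text=True))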
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.188 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 1c040558-99c8-40bd-8b21-1337faca7edc in datapath 617264bd-8d71-44c7-9bb9-ef21a37be5eb bound to our chassis
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.194 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.194 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.197 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.205 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2836af54-6bf6-42ec-909c-bc31271053fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.206 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap617264bd-81 in ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
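[provision_datapath builds the namespace plumbing through privsep; the privsep reply records that follow are the acknowledgements of those calls. Roughly what they do, as a standalone pyroute2 sketch with names from the log; neutron's actual helpers live in neutron.privileged.agent.linux.ip_lib, and this assumes the namespace does not already exist.]

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb'
    netns.create(NS)  # assumption: namespace absent before this call

    ipr = IPRoute()
    # veth pair: the -80 end stays in the root namespace (it is added to
    # br-int a few records below); the -81 end moves into the namespace
    # where the metadata haproxy will bind 169.254.169.254.
    ipr.link('add', ifname='tap617264bd-80', kind='veth',
             peer='tap617264bd-81')
    peer = ipr.link_lookup(ifname='tap617264bd-81')[0]
    ipr.link('set', index=peer, net_ns_fd=NS)
    ipr.link('set', index=ipr.link_lookup(ifname='tap617264bd-80')[0],
             state='up')
    ipr.close()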
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.209 242109 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap617264bd-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.209 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2aef2c0a-e96d-49ef-a5a4-3bad40596a20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.210 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[60863c97-cc45-4ccf-a34d-2c132e87d004]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 systemd-machined[158049]: New machine qemu-15-instance-00000009.
Feb 24 16:11:01 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-00000009.
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.222 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[cb14beb2-5449-440c-9c1e-b80d3c9dd229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.252 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[977f3fd5-8c72-494a-b53c-856edb511d9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.287 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[28405d28-5ca0-425e-9a81-0b7c1e3d2e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.2999] manager: (tap617264bd-80): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.300 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc60a81-78e6-40d9-a4c1-c3b006663ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.335 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcfb30e-732a-4b29-a13f-8c648916cf3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.339 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[86313644-ee55-40ab-b95e-39a349cce1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.3602] device (tap617264bd-80): carrier: link connected
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.365 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[23eea9ec-5611-4627-b998-ca5c1cc5fd79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.380 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[17ba3f30-ce16-4d9d-8234-e9b9082270b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617264bd-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517386, 'reachable_time': 19636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254664, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.394 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2e8f99-805e-425f-9236-cc9f52987ab7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:49bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517386, 'tstamp': 517386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254665, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.408 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[624377b8-20cb-4fcc-bff1-5d8a22ec3992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617264bd-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517386, 'reachable_time': 19636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254666, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 openstack_network_exporter[207830]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:11:01 compute-0 openstack_network_exporter[207830]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.452 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[fc53bef0-a7c8-4eac-9808-a9052d0c9fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.505 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[79fa85e0-3c20-4194-ad9e-b10f24dd012d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.506 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617264bd-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.507 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.509 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap617264bd-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:01 compute-0 NetworkManager[56995]: <info>  [1771949461.5160] manager: (tap617264bd-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Feb 24 16:11:01 compute-0 kernel: tap617264bd-80: entered promiscuous mode
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.517 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.521 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap617264bd-80, col_values=(('external_ids', {'iface-id': 'd50b9c1e-a71e-49f6-a0bc-95207c7d9dc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.522 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 ovn_controller[98701]: 2026-02-24T16:11:01Z|00152|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.523 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.526 108026 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.528 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.527 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b0001b-a564-468d-9f98-a45a5aeb0400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.529 108026 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: global
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     log         /dev/log local0 debug
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     log-tag     haproxy-metadata-proxy-617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     user        root
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     group       root
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     maxconn     1024
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     pidfile     /var/lib/neutron/external/pids/617264bd-8d71-44c7-9bb9-ef21a37be5eb.pid.haproxy
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     daemon
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: defaults
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     log global
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     mode http
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     option httplog
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     option dontlognull
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     option http-server-close
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     option forwardfor
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     retries                 3
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     timeout http-request    30s
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     timeout connect         30s
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     timeout client          32s
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     timeout server          32s
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     timeout http-keep-alive 30s
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: listen listener
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     bind 169.254.169.254:80
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     server metadata /var/lib/neutron/metadata_proxy
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:     http-request add-header X-OVN-Network-ID 617264bd-8d71-44c7-9bb9-ef21a37be5eb
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 24 16:11:01 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:01.530 108026 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'env', 'PROCESS_TAG=haproxy-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/617264bd-8d71-44c7-9bb9-ef21a37be5eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
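[The rendered configuration above binds haproxy to the link-local metadata address inside the namespace, tags each request with X-OVN-Network-ID, and forwards it to the agent's unix socket backend (/var/lib/neutron/metadata_proxy); the rootwrap command then launches haproxy inside the ovnmeta- namespace. Once the proxy is up, a guest on this network can fetch its metadata the usual way; a sketch that only works from inside the guest or the namespace, and assumes the standard OpenStack metadata path.]

    import urllib.request

    # 169.254.169.254:80 is the haproxy frontend from the config above; the
    # injected X-OVN-Network-ID header lets the agent resolve the requester.
    with urllib.request.urlopen(
            'http://169.254.169.254/openstack/latest/meta_data.json',
            timeout=5) as resp:
        print(resp.read().decode())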
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.619 188707 DEBUG nova.virt.libvirt.host [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Removed pending event for e365caeb-efd7-437b-aa10-e579f7c99f2b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.620 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949461.618885, e365caeb-efd7-437b-aa10-e579f7c99f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.620 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Resumed (Lifecycle Event)
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.626 188707 DEBUG nova.compute.manager [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.630 188707 INFO nova.virt.libvirt.driver [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance rebooted successfully.
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.630 188707 DEBUG nova.compute.manager [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.646 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.655 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.685 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.686 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949461.6242094, e365caeb-efd7-437b-aa10-e579f7c99f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.686 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Started (Lifecycle Event)
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.702 188707 DEBUG oslo_concurrency.lockutils [None req-d3f39793-28a3-4240-9445-9cbbf80d6f9b e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.708 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:11:01 compute-0 nova_compute[188703]: 2026-02-24 16:11:01.712 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:11:01 compute-0 podman[254702]: 2026-02-24 16:11:01.962726458 +0000 UTC m=+0.095174172 container create cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:11:02 compute-0 systemd[1]: Started libpod-conmon-cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1.scope.
Feb 24 16:11:02 compute-0 anacron[30515]: Job `cron.monthly' started
Feb 24 16:11:02 compute-0 podman[254702]: 2026-02-24 16:11:01.916318555 +0000 UTC m=+0.048766169 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 24 16:11:02 compute-0 anacron[30515]: Job `cron.monthly' terminated
Feb 24 16:11:02 compute-0 anacron[30515]: Normal exit (3 jobs run)
Feb 24 16:11:02 compute-0 systemd[1]: Started libcrun container.
Feb 24 16:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06060071211af2dca8e3b97cc4ce3564d07c9558231166af11ca4478967ae66d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 24 16:11:02 compute-0 podman[254702]: 2026-02-24 16:11:02.097662299 +0000 UTC m=+0.230109943 container init cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:11:02 compute-0 podman[254702]: 2026-02-24 16:11:02.108856559 +0000 UTC m=+0.241304173 container start cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 24 16:11:02 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [NOTICE]   (254722) : New worker (254724) forked
Feb 24 16:11:02 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [NOTICE]   (254722) : Loading success.
Feb 24 16:11:02 compute-0 nova_compute[188703]: 2026-02-24 16:11:02.369 188707 DEBUG nova.compute.manager [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-changed-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:02 compute-0 nova_compute[188703]: 2026-02-24 16:11:02.370 188707 DEBUG nova.compute.manager [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Refreshing instance network info cache due to event network-changed-308a4c02-df27-44f7-8630-7adbfdb9e316. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:11:02 compute-0 nova_compute[188703]: 2026-02-24 16:11:02.370 188707 DEBUG oslo_concurrency.lockutils [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:11:02 compute-0 nova_compute[188703]: 2026-02-24 16:11:02.370 188707 DEBUG oslo_concurrency.lockutils [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:11:02 compute-0 nova_compute[188703]: 2026-02-24 16:11:02.370 188707 DEBUG nova.network.neutron [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Refreshing network info cache for port 308a4c02-df27-44f7-8630-7adbfdb9e316 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.310 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-changed-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.310 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Refreshing instance network info cache due to event network-changed-540b66b9-f088-4e4e-bc6a-2c20cee24320. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.310 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.310 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.310 188707 DEBUG nova.network.neutron [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Refreshing network info cache for port 540b66b9-f088-4e4e-bc6a-2c20cee24320 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:11:03 compute-0 nova_compute[188703]: 2026-02-24 16:11:03.565 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.087 188707 DEBUG nova.network.neutron [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updated VIF entry in instance network info cache for port 308a4c02-df27-44f7-8630-7adbfdb9e316. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.088 188707 DEBUG nova.network.neutron [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updating instance_info_cache with network_info: [{"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.151 188707 DEBUG oslo_concurrency.lockutils [req-2a2ec36f-c486-4cd4-ab7b-2015f614ec81 req-945cfffc-a0b4-4e0b-94d2-535a214b19ba 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-511a6d08-b421-4fcb-bb1a-13d6ee450a2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.764 188707 DEBUG nova.network.neutron [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updated VIF entry in instance network info cache for port 540b66b9-f088-4e4e-bc6a-2c20cee24320. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.765 188707 DEBUG nova.network.neutron [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updating instance_info_cache with network_info: [{"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.789 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-4b3df01c-0b2d-42c4-90f7-59c995377765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.790 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.790 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.791 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.792 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.792 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] No waiting events found dispatching network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.793 188707 WARNING nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received unexpected event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc for instance with vm_state active and task_state None.
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.793 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.794 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.794 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.795 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.795 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] No waiting events found dispatching network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.796 188707 WARNING nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received unexpected event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc for instance with vm_state active and task_state None.
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.796 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.797 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.797 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.798 188707 DEBUG oslo_concurrency.lockutils [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.798 188707 DEBUG nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] No waiting events found dispatching network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:05 compute-0 nova_compute[188703]: 2026-02-24 16:11:05.799 188707 WARNING nova.compute.manager [req-8858543b-f867-480f-bdf0-4091235544cf req-528ee08f-c33d-4acd-b6c2-52296421ce70 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received unexpected event network-vif-plugged-1c040558-99c8-40bd-8b21-1337faca7edc for instance with vm_state active and task_state None.
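The burst above is Nova's external-event latch in action: a waiter registers for a named event under the per-instance "<uuid>-events" lock, and an incoming neutron notification pops the matching waiter; with no waiter registered (vm_state active, task_state None), the network-vif-plugged event is logged as unexpected and dropped. A minimal sketch of that latch pattern, with hypothetical names rather than Nova's actual implementation:

    # Minimal sketch of an event latch keyed by (instance, event-name),
    # in the spirit of nova.compute.manager.InstanceEvents. Hypothetical
    # simplification, not the real Nova code.
    import threading

    class InstanceEventLatch:
        def __init__(self):
            self._lock = threading.Lock()   # plays the role of the "<uuid>-events" lock
            self._waiters = {}              # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            # A task expecting an external event registers a waiter first.
            with self._lock:
                ev = threading.Event()
                self._waiters[(instance_uuid, event_name)] = ev
                return ev

        def pop(self, instance_uuid, event_name):
            # Called when an external notification (e.g. network-vif-plugged)
            # arrives; returns the waiter or None, mirroring the
            # "No waiting events found dispatching ..." branch in the log.
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

    latch = InstanceEventLatch()
    waiter = latch.pop("e365caeb-efd7-437b-aa10-e579f7c99f2b", "network-vif-plugged")
    if waiter is None:
        print("unexpected event, dropping")  # matches the WARNING lines above
    else:
        waiter.set()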
Feb 24 16:11:06 compute-0 nova_compute[188703]: 2026-02-24 16:11:06.091 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:06 compute-0 podman[254735]: 2026-02-24 16:11:06.170373544 +0000 UTC m=+0.128721880 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:11:06 compute-0 podman[254734]: 2026-02-24 16:11:06.18252262 +0000 UTC m=+0.125850310 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:11:07 compute-0 nova_compute[188703]: 2026-02-24 16:11:07.294 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:08 compute-0 nova_compute[188703]: 2026-02-24 16:11:08.567 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:08 compute-0 nova_compute[188703]: 2026-02-24 16:11:08.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:09 compute-0 nova_compute[188703]: 2026-02-24 16:11:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:09 compute-0 nova_compute[188703]: 2026-02-24 16:11:09.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:11:10 compute-0 nova_compute[188703]: 2026-02-24 16:11:10.574 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:11:10 compute-0 nova_compute[188703]: 2026-02-24 16:11:10.575 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:11:10 compute-0 nova_compute[188703]: 2026-02-24 16:11:10.576 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:11:11 compute-0 nova_compute[188703]: 2026-02-24 16:11:11.094 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:11 compute-0 podman[254776]: 2026-02-24 16:11:11.13284759 +0000 UTC m=+0.091956804 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.849 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [{"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.870 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-fc3a62d6-b05f-4032-a883-8c231d29ff29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.871 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.873 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.874 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.875 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.876 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.946 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:12 compute-0 nova_compute[188703]: 2026-02-24 16:11:12.974 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:13 compute-0 nova_compute[188703]: 2026-02-24 16:11:13.309 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:13.310 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:11:13 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:13.312 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:11:13 compute-0 nova_compute[188703]: 2026-02-24 16:11:13.569 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.101 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:16 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:16.315 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
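The metadata-agent lines above show ovsdbapp's row-event dispatch: a registered event class declares (events, table, conditions), each committed row change is tested against it, and run() fires on a match; here the SB_Global nb_cfg bump to 14 is acknowledged three seconds later by the DbSetCommand writing neutron:ovn-metadata-sb-cfg to the Chassis_Private row. A rough sketch of such an event class, using ovsdbapp's real base class but a simplified handler body:

    # Rough sketch of the ovsdbapp row-event pattern seen above; the
    # handler body is illustrative, not neutron's exact code.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors "SbGlobalUpdateEvent(events=('update',),
            # table='SB_Global', conditions=None, ...)" from the log.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # Called once per matched update; the agent reacts by (after a
            # short delay) writing neutron:ovn-metadata-sb-cfg back to its
            # Chassis_Private row, as in the transaction above.
            print('SB_Global nb_cfg is now', row.nb_cfg)

    # Typically registered on the southbound IDL, e.g. via
    # idl.notify_handler.watch_event(SbGlobalUpdateEvent()).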
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:16 compute-0 nova_compute[188703]: 2026-02-24 16:11:16.988 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.097 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.151 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.153 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.203 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.214 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.268 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.270 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.320 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.327 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.389 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.390 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.444 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.453 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.515 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.516 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.576 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.584 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.635 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.636 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:11:17 compute-0 nova_compute[188703]: 2026-02-24 16:11:17.715 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
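Each instance disk is probed with qemu-img info wrapped in oslo_concurrency.prlimit, so a malformed image cannot exceed 1 GiB of address space or 30 s of CPU during the resource audit. The same probe can be reproduced with the standard library (command line and path taken from the log; a sketch, not Nova's code):

    # Re-running the exact probe from the log with the standard library.
    import json
    import subprocess

    disk = "/var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",      # cap address space at 1 GiB
        "--cpu=30",             # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info.get("format"), info.get("virtual-size"))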
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.137 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.138 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4582MB free_disk=72.06926727294922GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.138 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.139 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.268 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance e365caeb-efd7-437b-aa10-e579f7c99f2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.268 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance fc3a62d6-b05f-4032-a883-8c231d29ff29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.268 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.269 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 511a6d08-b421-4fcb-bb1a-13d6ee450a2d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.269 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 4b3df01c-0b2d-42c4-90f7-59c995377765 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.269 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.269 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=79GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.287 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.309 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.309 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.326 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.351 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.465 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.482 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.513 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.513 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
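The figures line up: placement capacity is (total - reserved) × allocation_ratio per resource class, while the hypervisor view reports raw numbers (free_vcpus=3 is the physical 8 minus 5 allocated, and used_ram=1152MB is five 128 MB guests plus the 512 MB reservation). A quick check with the logged inventory, as a sketch:

    # Capacity math implied by the inventory and usage figures above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    used = {"VCPU": 5, "MEMORY_MB": 5 * 128, "DISK_GB": 5 * 1}  # five instances

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable:", capacity, "allocated:", used[rc])
    # VCPU schedulable: 32.0 allocated: 5 -- headroom despite 8 physical cores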
Feb 24 16:11:18 compute-0 nova_compute[188703]: 2026-02-24 16:11:18.572 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:20 compute-0 podman[254830]: 2026-02-24 16:11:20.143514632 +0000 UTC m=+0.104144230 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:11:20 compute-0 podman[254831]: 2026-02-24 16:11:20.166691763 +0000 UTC m=+0.120239965 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 16:11:21 compute-0 nova_compute[188703]: 2026-02-24 16:11:21.106 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:23 compute-0 nova_compute[188703]: 2026-02-24 16:11:23.573 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:24 compute-0 podman[254868]: 2026-02-24 16:11:24.124406479 +0000 UTC m=+0.088477767 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, vcs-type=git, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:11:24 compute-0 podman[254869]: 2026-02-24 16:11:24.148538317 +0000 UTC m=+0.105593651 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi)
Feb 24 16:11:26 compute-0 nova_compute[188703]: 2026-02-24 16:11:26.110 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:28 compute-0 ovn_controller[98701]: 2026-02-24T16:11:28Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:46:22 10.100.0.12
Feb 24 16:11:28 compute-0 ovn_controller[98701]: 2026-02-24T16:11:28Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:46:22 10.100.0.12
Feb 24 16:11:28 compute-0 nova_compute[188703]: 2026-02-24 16:11:28.577 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:29 compute-0 podman[204685]: time="2026-02-24T16:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:11:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32936 "" "Go-http-client/1.1"
Feb 24 16:11:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5766 "" "Go-http-client/1.1"
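The podman[204685] entries are the libpod REST API (served on /run/podman/podman.sock for the prometheus-podman-exporter) answering the exporter's container-list and stats calls. The same endpoint can be queried by hand over the Unix socket; a standard-library sketch, assuming the socket exists and is readable:

    # Minimal raw-HTTP query against the libpod socket seen in the log.
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    request = (
        "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Connection: close\r\n\r\n"
    )
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(65536):
        response += chunk
    sock.close()

    headers, _, body = response.partition(b"\r\n\r\n")
    # The body may arrive chunked; for a quick look, print the start of it.
    print(body[:200])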
Feb 24 16:11:29 compute-0 ovn_controller[98701]: 2026-02-24T16:11:29Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:bb:a8 10.100.0.7
Feb 24 16:11:29 compute-0 ovn_controller[98701]: 2026-02-24T16:11:29Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:bb:a8 10.100.0.7
Feb 24 16:11:30 compute-0 podman[254947]: 2026-02-24 16:11:30.147736693 +0000 UTC m=+0.098462573 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9/ubi-minimal, release=1770267347, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Feb 24 16:11:31 compute-0 nova_compute[188703]: 2026-02-24 16:11:31.114 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:31 compute-0 openstack_network_exporter[207830]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:11:31 compute-0 openstack_network_exporter[207830]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:11:33 compute-0 nova_compute[188703]: 2026-02-24 16:11:33.580 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:35 compute-0 ovn_controller[98701]: 2026-02-24T16:11:35Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:c9:69 10.100.0.10
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.516 188707 INFO nova.compute.manager [None req-8a66caa3-a3c4-4ca3-8f1f-72cfc6d2725a 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Get console output
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.522 241980 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
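The ignored pty error is Python's TypeError from concatenating None onto a bytes buffer when the console read returns no data; the benign handling is to treat None as end-of-output. Purely illustrative:

    # Illustrative guard for "can't concat NoneType to bytes": a pty
    # read that returns None simply contributes nothing to the buffer.
    def append_console_chunk(buf: bytes, chunk) -> bytes:
        if chunk is None:
            return buf
        return buf + chunk

    assert append_console_chunk(b"log: ", None) == b"log: "
    assert append_console_chunk(b"log: ", b"ok") == b"log: ok"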
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.815 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.815 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.816 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.816 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.816 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.817 188707 INFO nova.compute.manager [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Terminating instance
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.818 188707 DEBUG nova.compute.manager [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:11:35 compute-0 kernel: tap308a4c02-df (unregistering): left promiscuous mode
Feb 24 16:11:35 compute-0 NetworkManager[56995]: <info>  [1771949495.8514] device (tap308a4c02-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:11:35 compute-0 ovn_controller[98701]: 2026-02-24T16:11:35Z|00153|binding|INFO|Releasing lport 308a4c02-df27-44f7-8630-7adbfdb9e316 from this chassis (sb_readonly=0)
Feb 24 16:11:35 compute-0 ovn_controller[98701]: 2026-02-24T16:11:35Z|00154|binding|INFO|Setting lport 308a4c02-df27-44f7-8630-7adbfdb9e316 down in Southbound
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.861 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:35 compute-0 ovn_controller[98701]: 2026-02-24T16:11:35Z|00155|binding|INFO|Removing iface tap308a4c02-df ovn-installed in OVS
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.870 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.877 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:bb:a8 10.100.0.7'], port_security=['fa:16:3e:fb:bb:a8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '511a6d08-b421-4fcb-bb1a-13d6ee450a2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d42735c7eb84888b6c3dca096466e04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd295f37d-3220-4591-855f-7d991af78faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2db2ff8a-782e-4e32-b2de-a44ea0ff97e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=308a4c02-df27-44f7-8630-7adbfdb9e316) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.881 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 308a4c02-df27-44f7-8630-7adbfdb9e316 in datapath aeadce2d-53c4-4727-bbc6-e1191df0ffea unbound from our chassis
Feb 24 16:11:35 compute-0 nova_compute[188703]: 2026-02-24 16:11:35.882 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:35 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 24 16:11:35 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 35.870s CPU time.
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.898 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aeadce2d-53c4-4727-bbc6-e1191df0ffea
Feb 24 16:11:35 compute-0 systemd-machined[158049]: Machine qemu-13-instance-0000000d terminated.
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.921 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[dc499717-e53d-4e0c-b251-b2d0b0482e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.954 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[25e197c3-886f-4be6-88f8-95fe20a76fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.961 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[2019f3ce-e45b-4025-a79b-43a57be08b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:35 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:35.992 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[06ab6908-d5ef-486b-a149-16d51de55867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.010 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4675d82e-43f5-4b81-af30-9b0e0d8d9fa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaeadce2d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:98:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509928, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254996, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.023 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fc7f3c-147f-4371-9061-d28930b8c537]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaeadce2d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509938, 'tstamp': 509938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254997, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaeadce2d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509940, 'tstamp': 509940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254997, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.025 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeadce2d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.029 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.032 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.033 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaeadce2d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.034 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.035 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaeadce2d-50, col_values=(('external_ids', {'iface-id': 'e6d03cb3-ba09-4724-83d3-edb05289054b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:36 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:36.036 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.042 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.045 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.095 188707 INFO nova.virt.libvirt.driver [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Instance destroyed successfully.
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.096 188707 DEBUG nova.objects.instance [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'resources' on Instance uuid 511a6d08-b421-4fcb-bb1a-13d6ee450a2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.116 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.129 188707 DEBUG nova.virt.libvirt.vif [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-364477853',display_name='tempest-TestNetworkBasicOps-server-364477853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-364477853',id=13,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLDhIGZ45cerZt16vXCE/2M+e0EvezYGSOKoMg7r1jolvwLCe7vqABOiTN3bC7vFpcsuQPOPQnsd5lAZ7mCFOwj9tIZ1SaXBoBDHrHg3VzhKyfqO/D4flGPVobB7p10FqQ==',key_name='tempest-TestNetworkBasicOps-1427864370',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-umhleyp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:10:56Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=511a6d08-b421-4fcb-bb1a-13d6ee450a2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.130 188707 DEBUG nova.network.os_vif_util [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "308a4c02-df27-44f7-8630-7adbfdb9e316", "address": "fa:16:3e:fb:bb:a8", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap308a4c02-df", "ovs_interfaceid": "308a4c02-df27-44f7-8630-7adbfdb9e316", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.132 188707 DEBUG nova.network.os_vif_util [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.133 188707 DEBUG os_vif [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.136 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.136 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap308a4c02-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.138 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.144 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.147 188707 INFO os_vif [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:bb:a8,bridge_name='br-int',has_traffic_filtering=True,id=308a4c02-df27-44f7-8630-7adbfdb9e316,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap308a4c02-df')
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.149 188707 INFO nova.virt.libvirt.driver [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Deleting instance files /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d_del
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.150 188707 INFO nova.virt.libvirt.driver [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Deletion of /var/lib/nova/instances/511a6d08-b421-4fcb-bb1a-13d6ee450a2d_del complete
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.225 188707 INFO nova.compute.manager [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Took 0.41 seconds to destroy the instance on the hypervisor.
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.225 188707 DEBUG oslo.service.loopingcall [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.226 188707 DEBUG nova.compute.manager [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.227 188707 DEBUG nova.network.neutron [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.234 188707 DEBUG nova.compute.manager [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-unplugged-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.234 188707 DEBUG oslo_concurrency.lockutils [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.235 188707 DEBUG oslo_concurrency.lockutils [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.236 188707 DEBUG oslo_concurrency.lockutils [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.236 188707 DEBUG nova.compute.manager [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] No waiting events found dispatching network-vif-unplugged-308a4c02-df27-44f7-8630-7adbfdb9e316 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:36 compute-0 nova_compute[188703]: 2026-02-24 16:11:36.237 188707 DEBUG nova.compute.manager [req-93a66143-6cbe-4f19-9034-2ed5a004c2f7 req-3f2813f6-5024-448b-82a3-be845e78c602 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-unplugged-308a4c02-df27-44f7-8630-7adbfdb9e316 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:11:37 compute-0 podman[255015]: 2026-02-24 16:11:37.139889451 +0000 UTC m=+0.097858957 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 24 16:11:37 compute-0 podman[255016]: 2026-02-24 16:11:37.178155318 +0000 UTC m=+0.129731876 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:11:37 compute-0 nova_compute[188703]: 2026-02-24 16:11:37.930 188707 DEBUG nova.network.neutron [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:11:37 compute-0 nova_compute[188703]: 2026-02-24 16:11:37.972 188707 INFO nova.compute.manager [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Took 1.75 seconds to deallocate network for instance.
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.017 188707 DEBUG nova.compute.manager [req-41a1b15d-28f5-4840-9b81-7bdd431a46a9 req-d9e720b1-6eac-4b37-a057-672580280b0f 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-deleted-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.023 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.023 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.176 188707 DEBUG nova.compute.provider_tree [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.196 188707 DEBUG nova.scheduler.client.report [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.238 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.275 188707 INFO nova.scheduler.client.report [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Deleted allocations for instance 511a6d08-b421-4fcb-bb1a-13d6ee450a2d
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.355 188707 DEBUG oslo_concurrency.lockutils [None req-a1b5e696-2d70-4456-a490-bbe096013593 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.408 188707 DEBUG nova.compute.manager [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.408 188707 DEBUG oslo_concurrency.lockutils [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.409 188707 DEBUG oslo_concurrency.lockutils [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.410 188707 DEBUG oslo_concurrency.lockutils [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "511a6d08-b421-4fcb-bb1a-13d6ee450a2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.412 188707 DEBUG nova.compute.manager [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] No waiting events found dispatching network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.413 188707 WARNING nova.compute.manager [req-ed277a97-5331-4ef2-ad4a-42807b7f080b req-b25914fa-26ba-4c4d-a92d-33ba522f845b 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Received unexpected event network-vif-plugged-308a4c02-df27-44f7-8630-7adbfdb9e316 for instance with vm_state deleted and task_state None.
Feb 24 16:11:38 compute-0 nova_compute[188703]: 2026-02-24 16:11:38.585 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.837 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.839 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece66f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.849 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e365caeb-efd7-437b-aa10-e579f7c99f2b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 16:11:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:39.850 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e365caeb-efd7-437b-aa10-e579f7c99f2b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.413 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1982 Content-Type: application/json Date: Tue, 24 Feb 2026 16:11:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-70e0a7ea-c79b-49d4-88d7-756c077290b2 x-openstack-request-id: req-70e0a7ea-c79b-49d4-88d7-756c077290b2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.414 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e365caeb-efd7-437b-aa10-e579f7c99f2b", "name": "tempest-ServerActionsTestJSON-server-1465534534", "status": "ACTIVE", "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "user_id": "e3fc739339a5496cb9c0e2e0eebefd55", "metadata": {}, "hostId": "a315daf2c81641c8a16f4e167f5acadd50cb3b71c04769e7eb833cfa", "image": {"id": "ee41af80-6a60-4735-8135-3a06de2a36b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ee41af80-6a60-4735-8135-3a06de2a36b2"}]}, "flavor": {"id": "3303ac8b-27ad-4047-abf8-38e38cd23b6f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3303ac8b-27ad-4047-abf8-38e38cd23b6f"}]}, "created": "2026-02-24T16:09:30Z", "updated": "2026-02-24T16:11:01Z", "addresses": {"tempest-ServerActionsTestJSON-1538167623-network": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:30:c9:69"}, {"version": 4, "addr": "192.168.122.194", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:30:c9:69"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e365caeb-efd7-437b-aa10-e579f7c99f2b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e365caeb-efd7-437b-aa10-e579f7c99f2b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-2037101796", "OS-SRV-USG:launched_at": "2026-02-24T16:09:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1141040665"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.414 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e365caeb-efd7-437b-aa10-e579f7c99f2b used request id req-70e0a7ea-c79b-49d4-88d7-756c077290b2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.415 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e365caeb-efd7-437b-aa10-e579f7c99f2b', 'name': 'tempest-ServerActionsTestJSON-server-1465534534', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9b63b7c206004c42b699bdc42c129b6b', 'user_id': 'e3fc739339a5496cb9c0e2e0eebefd55', 'hostId': 'a315daf2c81641c8a16f4e167f5acadd50cb3b71c04769e7eb833cfa', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
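
The DEBUG lines above are one discovery round trip: when ceilometer's compute agent has no cached metadata for a libvirt domain, it resolves the instance through the Nova API with a single GET /v2.1/servers/{id}. A minimal sketch of the same call with python-novaclient and keystoneauth1 follows; the auth URL and credentials are placeholders, not values taken from this log:

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    # Placeholder credentials and endpoint -- substitute real Keystone values.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',
        username='ceilometer', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    nova = nova_client.Client('2.1', session=sess)
    # Same call as the REQ/RESP pair above; the token is sent by the
    # session and appears in debug logs only as a SHA256 hash.
    server = nova.servers.get('e365caeb-efd7-437b-aa10-e579f7c99f2b')
    print(server.name, server.status, server.flavor['id'])
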
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.418 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 16:11:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:40.419 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.486 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.487 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.489 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.489 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.490 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.493 188707 INFO nova.compute.manager [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Terminating instance
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.495 188707 DEBUG nova.compute.manager [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
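
The lock lines above show nova's per-instance serialization: do_terminate_instance runs under a lock named by the instance UUID, and a second short-lived "<uuid>-events" lock guards the instance event queue. Both come from oslo.concurrency; a minimal sketch of the two forms (lock names copied from the log, bodies elided):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'fc3a62d6-b05f-4032-a883-8c231d29ff29'

    # Decorator form: callers contending on the same lock name run one at
    # a time (in-process by default; external=True would use a file lock).
    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        pass  # body elided

    # Context-manager form, matching the short "<uuid>-events" section.
    with lockutils.lock(INSTANCE_UUID + '-events'):
        pass  # clear pending events for the instance
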
Feb 24 16:11:40 compute-0 kernel: tap89307b57-fe (unregistering): left promiscuous mode
Feb 24 16:11:40 compute-0 NetworkManager[56995]: <info>  [1771949500.5455] device (tap89307b57-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:11:40 compute-0 ovn_controller[98701]: 2026-02-24T16:11:40Z|00156|binding|INFO|Releasing lport 89307b57-fe85-45b9-b123-781c385e8fec from this chassis (sb_readonly=0)
Feb 24 16:11:40 compute-0 ovn_controller[98701]: 2026-02-24T16:11:40Z|00157|binding|INFO|Setting lport 89307b57-fe85-45b9-b123-781c385e8fec down in Southbound
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.559 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 ovn_controller[98701]: 2026-02-24T16:11:40Z|00158|binding|INFO|Removing iface tap89307b57-fe ovn-installed in OVS
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.566 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.578 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:cd:7f 10.100.0.13'], port_security=['fa:16:3e:0c:cd:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fc3a62d6-b05f-4032-a883-8c231d29ff29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d42735c7eb84888b6c3dca096466e04', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0df7c72f-5c7a-4af5-b1f1-1b6470a83b83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2db2ff8a-782e-4e32-b2de-a44ea0ff97e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=89307b57-fe85-45b9-b123-781c385e8fec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.582 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 89307b57-fe85-45b9-b123-781c385e8fec in datapath aeadce2d-53c4-4727-bbc6-e1191df0ffea unbound from our chassis
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.587 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.588 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aeadce2d-53c4-4727-bbc6-e1191df0ffea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.589 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[f3300d94-4d3e-496a-b0b0-34be80466a78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.590 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea namespace which is not needed anymore
Feb 24 16:11:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 24 16:11:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 46.495s CPU time.
Feb 24 16:11:40 compute-0 systemd-machined[158049]: Machine qemu-10-instance-0000000a terminated.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.720 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.725 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [NOTICE]   (253424) : haproxy version is 2.8.14-c23fe91
Feb 24 16:11:40 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [NOTICE]   (253424) : path to executable is /usr/sbin/haproxy
Feb 24 16:11:40 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [WARNING]  (253424) : Exiting Master process...
Feb 24 16:11:40 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [ALERT]    (253424) : Current worker (253426) exited with code 143 (Terminated)
Feb 24 16:11:40 compute-0 neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea[253419]: [WARNING]  (253424) : All workers exited. Exiting... (0)
Feb 24 16:11:40 compute-0 systemd[1]: libpod-c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97.scope: Deactivated successfully.
Feb 24 16:11:40 compute-0 conmon[253419]: conmon c9b3908d4e2c0db2f5c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97.scope/container/memory.events
Feb 24 16:11:40 compute-0 podman[255084]: 2026-02-24 16:11:40.742273814 +0000 UTC m=+0.055867816 container died c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.763 188707 INFO nova.virt.libvirt.driver [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Instance destroyed successfully.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.764 188707 DEBUG nova.objects.instance [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lazy-loading 'resources' on Instance uuid fc3a62d6-b05f-4032-a883-8c231d29ff29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.784 188707 DEBUG nova.virt.libvirt.vif [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860691187',display_name='tempest-TestNetworkBasicOps-server-1860691187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860691187',id=10,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWQxWtIV9FMssvPhsJSS0b43cQ+JeTN5OmMh7ANpmE26YYNPcHmkssbLiZupNMfTv7+TFqDL55tdsAqB5HmEAQshKtXfoH8ypUBR8AFOF1LF0BF4BWy/RQntVsycbSHuQ==',key_name='tempest-TestNetworkBasicOps-1833619502',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:09:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d42735c7eb84888b6c3dca096466e04',ramdisk_id='',reservation_id='r-3kr6aclc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2112956786',owner_user_name='tempest-TestNetworkBasicOps-2112956786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:09:48Z,user_data=None,user_id='7cec00195bca4d15bbb0449e21faedcf',uuid=fc3a62d6-b05f-4032-a883-8c231d29ff29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97-userdata-shm.mount: Deactivated successfully.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.785 188707 DEBUG nova.network.os_vif_util [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converting VIF {"id": "89307b57-fe85-45b9-b123-781c385e8fec", "address": "fa:16:3e:0c:cd:7f", "network": {"id": "aeadce2d-53c4-4727-bbc6-e1191df0ffea", "bridge": "br-int", "label": "tempest-network-smoke--1718360958", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d42735c7eb84888b6c3dca096466e04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89307b57-fe", "ovs_interfaceid": "89307b57-fe85-45b9-b123-781c385e8fec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.787 188707 DEBUG nova.network.os_vif_util [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.787 188707 DEBUG os_vif [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-940981fb5d2b8a413d7fff2f4e0ebc8b1e75429d070e17f9456a6798c7aab8ed-merged.mount: Deactivated successfully.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.789 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.789 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89307b57-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.791 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.793 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.796 188707 INFO os_vif [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:cd:7f,bridge_name='br-int',has_traffic_filtering=True,id=89307b57-fe85-45b9-b123-781c385e8fec,network=Network(aeadce2d-53c4-4727-bbc6-e1191df0ffea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89307b57-fe')
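
The unplug above goes through os-vif: nova converts its VIF dict to a VIFOpenVSwitch object and hands it to the 'ovs' plugin. A rough standalone sketch with a trimmed field set (the objects accept more fields than shown, the plugin must be installed, and unplugging needs ovsdb access and root; the literal values are copied from the log):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads plugins via stevedore; 'ovs' must be present

    net = network.Network(id='aeadce2d-53c4-4727-bbc6-e1191df0ffea',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(id='89307b57-fe85-45b9-b123-781c385e8fec',
                           address='fa:16:3e:0c:cd:7f',
                           vif_name='tap89307b57-fe',
                           bridge_name='br-int',
                           plugin='ovs',
                           network=net)
    inst = instance_info.InstanceInfo(
        uuid='fc3a62d6-b05f-4032-a883-8c231d29ff29',
        name='instance-0000000a')

    os_vif.unplug(v, inst)  # removes tap89307b57-fe from br-int via ovsdb
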
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.796 188707 INFO nova.virt.libvirt.driver [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Deleting instance files /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29_del
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.797 188707 INFO nova.virt.libvirt.driver [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Deletion of /var/lib/nova/instances/fc3a62d6-b05f-4032-a883-8c231d29ff29_del complete
Feb 24 16:11:40 compute-0 podman[255084]: 2026-02-24 16:11:40.797633794 +0000 UTC m=+0.111227786 container cleanup c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:11:40 compute-0 systemd[1]: libpod-conmon-c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97.scope: Deactivated successfully.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.857 188707 INFO nova.compute.manager [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.858 188707 DEBUG oslo.service.loopingcall [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.859 188707 DEBUG nova.compute.manager [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.859 188707 DEBUG nova.network.neutron [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:11:40 compute-0 podman[255124]: 2026-02-24 16:11:40.900353355 +0000 UTC m=+0.078727348 container remove c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.906 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3cae42-d5e7-4833-8eb9-5ae010c98003]: (4, ('Tue Feb 24 04:11:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea (c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97)\nc9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97\nTue Feb 24 04:11:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea (c9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97)\nc9b3908d4e2c0db2f5c6c56cbd37a849ea9085e0892f115bca800eb5ef09cd97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.908 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb5809b-e623-4928-9206-d61916258909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.909 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeadce2d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.911 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 kernel: tapaeadce2d-50: left promiscuous mode
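
Both DelPortCommand transactions in this sequence (nova removing tap89307b57-fe, the metadata agent removing tapaeadce2d-50) are ovsdbapp idl transactions against the local Open_vSwitch database. A minimal standalone equivalent, assuming the standard local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed socket path for the local ovsdb-server.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of DelPortCommand(port=..., if_exists=True) in the log.
    api.del_port('tapaeadce2d-50', if_exists=True).execute(check_error=True)
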
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.916 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.919 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[181b2cbc-c699-438f-acd5-30ebbaf70ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 nova_compute[188703]: 2026-02-24 16:11:40.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.934 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[28d0c6cf-4237-4363-9adc-088cc7f93490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.935 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[129e9c29-6e25-4360-9c2e-caab2a30d4ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.950 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[332fa16c-d316-435d-a360-7f6c5c841041]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509918, 'reachable_time': 36804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255136, 'error': None, 'target': 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.952 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:11:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:40.953 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[14b1b394-acff-411a-b5d2-6b7d36c894d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:11:40 compute-0 systemd[1]: run-netns-ovnmeta\x2daeadce2d\x2d53c4\x2d4727\x2dbbc6\x2de1191df0ffea.mount: Deactivated successfully.
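
The remove_netns line above is neutron's privileged ip_lib deleting the now-empty ovnmeta namespace; under the hood this is a pyroute2 network-namespace removal. A minimal sketch of the same operation (requires root):

    from pyroute2 import netns

    NS = 'ovnmeta-aeadce2d-53c4-4727-bbc6-e1191df0ffea'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks /var/run/netns/<NS>, as systemd reports above
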
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.077 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 24 Feb 2026 16:11:40 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-33ba3a7a-8b93-49e0-a712-73aa67a8ca02 x-openstack-request-id: req-33ba3a7a-8b93-49e0-a712-73aa67a8ca02 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.078 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124", "name": "te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm", "status": "ACTIVE", "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "user_id": "69d3eddd2a7d49bf9a69e0ccbb00f957", "metadata": {"metering.server_group": "677c1c47-5c86-4e10-835b-809c15045b3b"}, "hostId": "d3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b", "image": {"id": "c4831085-6e4d-4710-9d1c-263fd9bf6235", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c4831085-6e4d-4710-9d1c-263fd9bf6235"}]}, "flavor": {"id": "3303ac8b-27ad-4047-abf8-38e38cd23b6f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3303ac8b-27ad-4047-abf8-38e38cd23b6f"}]}, "created": "2026-02-24T16:09:51Z", "updated": "2026-02-24T16:10:01Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.165", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ab:a3:60"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T16:10:01.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.078 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 used request id req-33ba3a7a-8b93-49e0-a712-73aa67a8ca02 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.079 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.082 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4b3df01c-0b2d-42c4-90f7-59c995377765 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.082 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4b3df01c-0b2d-42c4-90f7-59c995377765 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.173 188707 DEBUG nova.compute.manager [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-unplugged-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.174 188707 DEBUG oslo_concurrency.lockutils [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.174 188707 DEBUG oslo_concurrency.lockutils [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.175 188707 DEBUG oslo_concurrency.lockutils [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.175 188707 DEBUG nova.compute.manager [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] No waiting events found dispatching network-vif-unplugged-89307b57-fe85-45b9-b123-781c385e8fec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.175 188707 DEBUG nova.compute.manager [req-636fc450-05e1-46dd-97e7-fbba6a783bcd req-c5ea266b-108c-4ece-9946-32c38e621657 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-unplugged-89307b57-fe85-45b9-b123-781c385e8fec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.476 188707 DEBUG nova.network.neutron [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.505 188707 INFO nova.compute.manager [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Took 0.65 seconds to deallocate network for instance.
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.561 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.562 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.686 188707 DEBUG nova.compute.provider_tree [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.709 188707 DEBUG nova.scheduler.client.report [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
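
The inventory dict above is what this node reports to placement. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; a quick check of the figures in the log:

    # Inventory exactly as reported in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
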
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.913 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.934 188707 INFO nova.scheduler.client.report [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Deleted allocations for instance fc3a62d6-b05f-4032-a883-8c231d29ff29
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.965 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2085 Content-Type: application/json Date: Tue, 24 Feb 2026 16:11:41 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-10f90cc3-2e6d-453f-bd83-445c80443a7b x-openstack-request-id: req-10f90cc3-2e6d-453f-bd83-445c80443a7b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.965 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4b3df01c-0b2d-42c4-90f7-59c995377765", "name": "tempest-TestServerBasicOps-server-1047622473", "status": "ACTIVE", "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "user_id": "5cceae9386a64ff6b1ff736d2a86285f", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "64ebe548a0baef7ae6d06a994f4ad01280d5c8ca2dc155e5d5dd48b2", "image": {"id": "ee41af80-6a60-4735-8135-3a06de2a36b2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ee41af80-6a60-4735-8135-3a06de2a36b2"}]}, "flavor": {"id": "3303ac8b-27ad-4047-abf8-38e38cd23b6f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3303ac8b-27ad-4047-abf8-38e38cd23b6f"}]}, "created": "2026-02-24T16:10:45Z", "updated": "2026-02-24T16:10:55Z", "addresses": {"tempest-TestServerBasicOps-646715538-network": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:46:22"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:46:22"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4b3df01c-0b2d-42c4-90f7-59c995377765"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4b3df01c-0b2d-42c4-90f7-59c995377765"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1767552992", "OS-SRV-USG:launched_at": "2026-02-24T16:10:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1614994157"}, {"name": "tempest-securitygroup--1287084466"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.965 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4b3df01c-0b2d-42c4-90f7-59c995377765 used request id req-10f90cc3-2e6d-453f-bd83-445c80443a7b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.967 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4b3df01c-0b2d-42c4-90f7-59c995377765', 'name': 'tempest-TestServerBasicOps-server-1047622473', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ee41af80-6a60-4735-8135-3a06de2a36b2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a652d479d5204330b31c0f67ffd65a20', 'user_id': '5cceae9386a64ff6b1ff736d2a86285f', 'hostId': '64ebe548a0baef7ae6d06a994f4ad01280d5c8ca2dc155e5d5dd48b2', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'fc3a62d6-b05f-4032-a883-8c231d29ff29' (instance-0000000a)
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'fc3a62d6-b05f-4032-a883-8c231d29ff29' (instance-0000000a)
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.969 14 ERROR ceilometer.compute.virt.libvirt.utils [-] Failed to get domain uuid fc3a62d6-b05f-4032-a883-8c231d29ff29 metadata, libvirtError: Domain not found: no domain with matching uuid 'fc3a62d6-b05f-4032-a883-8c231d29ff29' (instance-0000000a)
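
The two libvirt errors and the ceilometer ERROR above are a benign race: the agent discovered instance fc3a62d6 before nova destroyed its domain (see the terminate sequence earlier in this window), and the domain was gone by the time the pollster queried libvirt. A minimal sketch of how such a lookup race is typically tolerated with libvirt-python:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('fc3a62d6-b05f-4032-a883-8c231d29ff29')
    except libvirt.libvirtError as exc:
        if exc.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            dom = None  # instance deleted mid-cycle: drop it from this sample
        else:
            raise
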
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.969 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.969 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.969 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.970 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:11:41.970010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:41 compute-0 nova_compute[188703]: 2026-02-24 16:11:41.992 188707 DEBUG oslo_concurrency.lockutils [None req-4cc782a9-bb97-40cf-9e94-5e521658eee9 7cec00195bca4d15bbb0449e21faedcf 6d42735c7eb84888b6c3dca096466e04 - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:41 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:41.998 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/memory.usage volume: 42.2734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.019 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 47.38671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.041 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/memory.usage volume: 42.59765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
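
The memory.usage volumes above (42.27, 47.39, and 42.60 MB) come from libvirt guest memory statistics. A plausible reading with libvirt-python, assuming the usual derivation from the balloon stats (available minus unused, reported in KiB) with a host-side RSS fallback; the exact formula ceilometer applies can differ by release:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('e365caeb-efd7-437b-aa10-e579f7c99f2b')

    stats = dom.memoryStats()  # balloon-driver stats, values in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
    else:
        usage_mb = stats.get('rss', 0) / 1024.0  # fallback: host-side RSS
    print(usage_mb)
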
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:11:42.042299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.057 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.058 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.075 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.076 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.089 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.090 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
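
disk.device.allocation is sampled once per block device, which is why each instance above contributes two samples (a ~30 MB root-disk allocation plus a 512000-byte config drive). libvirt exposes these numbers through blockInfo; in the sketch below the device names are assumptions based on the image properties seen earlier (virtio root disk, SATA config-drive CD-ROM), not values from the log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4b3df01c-0b2d-42c4-90f7-59c995377765')

    # Device names are assumptions (virtio root disk, SATA config drive).
    for dev in ('vda', 'sda'):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, allocation)
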
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:11:42.091321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.095 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e365caeb-efd7-437b-aa10-e579f7c99f2b / tap1c040558-99 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.095 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.100 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 / tap06aae3cb-60 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.100 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.103 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4b3df01c-0b2d-42c4-90f7-59c995377765 / tap540b66b9-f0 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.103 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.incoming.bytes volume: 1431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.104 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.105 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:11:42.104590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:11:42.106253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.106 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
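The "No delta meter predecessor" lines earlier and the zero-valued network.incoming.bytes.delta samples just above are two sides of the same mechanism: the inspector caches the previous reading per (instance, vNIC) key, and on the first poll after agent start there is nothing to subtract from, so the delta is reported as 0. A small sketch of such a predecessor cache (illustrative, not the inspector's actual code):

    class DeltaCache:
        """Keeps the last counter value per key so the next poll can report
        a delta; first sight of a key yields 0, matching the log above."""

        def __init__(self):
            self._prev = {}

        def delta(self, key, current):
            prev = self._prev.get(key)
            self._prev[key] = current
            if prev is None:
                return 0  # no predecessor yet
            return max(current - prev, 0)  # clamp in case the counter reset

    cache = DeltaCache()
    print(cache.delta(("e365caeb", "tap1c040558-99"), 1431))  # 0
    print(cache.delta(("e365caeb", "tap1c040558-99"), 2200))  # 769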
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.107 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.108 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.108 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
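power.state reports the raw libvirt domain state code, so volume: 1 for all three instances means VIR_DOMAIN_RUNNING. For reference, the virDomainState values:

    # virDomainState codes from libvirt; the pollster emits the integer as-is.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # running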
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.108 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.109 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.110 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.110 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:11:42.107731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.110 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:11:42.109246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
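The paired capacity samples decode cleanly: 1073741824 bytes is exactly a 1 GiB root disk, while the 509952-byte second device (with 512000 bytes allocated per the earlier allocation samples) is a small secondary volume, most likely a config drive, though that is an inference, not something the log states. These figures come from libvirt's per-device block info, which ceilometer spreads across disk.device.capacity, disk.device.allocation, and disk.device.usage. Note also that the interleaved heartbeat lines are written by a different thread (12) than the sampling thread (14), which is why a power.state heartbeat can appear mid-capacity-cycle. A quick unit check on the values (assumption: plain bytes):

    root_capacity = 1073741824
    second_device = 509952
    print(root_capacity / 2**30)   # 1.0  -> a 1 GiB root disk
    print(second_device / 1024)    # 498.0 -> ~498 KiB secondary device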
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.111 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:11:42.111652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 nova_compute[188703]: 2026-02-24 16:11:42.124 188707 DEBUG nova.compute.manager [req-fda9c3de-b2bb-4199-872a-01432df2cc7f req-e4d31f71-35cf-457b-9cc2-ba8b0966ce09 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-deleted-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:42 compute-0 podman[255137]: 2026-02-24 16:11:42.125394433 +0000 UTC m=+0.086540724 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.152 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.bytes volume: 32036864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.152 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.185 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 29330432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.186 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.222 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.bytes volume: 31017472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.222 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.224 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.225 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:11:42.224378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.225 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.226 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.227 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.227 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.latency volume: 1256489402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.227 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.latency volume: 93423509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:11:42.227098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.228 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1037277216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.228 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 103970129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.228 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.latency volume: 1114857020 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.229 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.latency volume: 79254764 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.230 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.230 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/cpu volume: 33600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.231 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 98510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.231 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/cpu volume: 31500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:11:42.230697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
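The cpu meter is cumulative guest CPU time in nanoseconds, so the samples above read directly: 33600000000 ns is 33.6 s of CPU time since the instance started. A utilization percentage comes from the rate of change between two polls, as in this back-of-the-envelope check (the second reading, the 300 s interval, and the vCPU count are assumptions for illustration):

    # Cumulative CPU time (ns) from two hypothetical polls of one instance.
    cpu_t0, cpu_t1 = 33_600_000_000, 33_900_000_000
    interval_s, vcpus = 300, 1          # assumed polling interval and vCPU count
    used_s = (cpu_t1 - cpu_t0) / 1e9
    print(f"{100 * used_s / (interval_s * vcpus):.1f}% CPU")  # 0.1% CPU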
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.232 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.233 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.233 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.requests volume: 1212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:11:42.233116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.233 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.234 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.234 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.234 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.requests volume: 1135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.235 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.236 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.236 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.237 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.237 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:11:42.236598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.238 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.239 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.239 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.239 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:11:42.239402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.240 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.241 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.241 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.242 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:11:42.242048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.242 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.243 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.243 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.243 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.244 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.245 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:11:42.245763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.246 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.latency volume: 49292590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.246 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.246 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3781114837 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.247 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.247 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.latency volume: 4229126733 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.247 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.248 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.248 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.248 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.249 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.249 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.249 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.250 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.bytes volume: 278528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:11:42.249625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.250 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.250 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.251 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.251 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.bytes volume: 72966144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.251 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.252 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.253 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.253 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T16:11:42.253507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.254 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1465534534>, <NovaLikeServer: te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm>, <NovaLikeServer: tempest-TestServerBasicOps-server-1047622473>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1465534534>, <NovaLikeServer: te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm>, <NovaLikeServer: tempest-TestServerBasicOps-server-1047622473>]
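The two lines at 16:11:42.253-254 show the permanent-failure path: the libvirt inspector has no data for precomputed *.rate meters, the pollster raises PollsterPermanentError carrying the affected resources, and the manager blacklists them so this pollster is never retried against them ("Prevent pollster ... anymore!"). A simplified sketch of that contract (illustrative, not manager.py itself):

    class PollsterPermanentError(Exception):
        """Signals the manager that these resources can never be polled."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def poll_once(get_samples, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.extend(err.resources)   # skipped on every later cycle
            return []

    blacklist = []

    def rate_pollster(resources):
        # the inspector exposes no rate data, so fail permanently
        raise PollsterPermanentError(resources)

    print(poll_once(rate_pollster, ["vm-1", "vm-2"], blacklist))  # []
    print(blacklist)  # ['vm-1', 'vm-2'] -> never polled again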
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.255 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.255 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.255 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.255 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.requests volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:11:42.255674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.256 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.256 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.257 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.257 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.257 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
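
Note: each "volume: N / volume: 0" pair above is one instance with two block devices, the second of which has seen no writes. The counters come from libvirt's per-device block statistics; a hedged stand-alone check of the same numbers via the libvirt-python binding (connection URI and device names are assumptions; the UUID is from the log):

    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e365caeb-efd7-437b-aa10-e579f7c99f2b")
    for dev in ("vda", "vdb"):  # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(f"{dom.UUIDString()}/{dev} disk.device.write.requests = {wr_req}")
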
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.259 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.260 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:11:42.259639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.260 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.260 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.262 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:11:42.262564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.263 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.264 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.265 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:11:42.265778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.267 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.268 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:11:42.268047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.270 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T16:11:42.270279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.271 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1465534534>, <NovaLikeServer: te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm>, <NovaLikeServer: tempest-TestServerBasicOps-server-1047622473>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1465534534>, <NovaLikeServer: te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm>, <NovaLikeServer: tempest-TestServerBasicOps-server-1047622473>]
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.271 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.272 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:11:42.272208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.273 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.273 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:11:42.274793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.275 14 DEBUG ceilometer.compute.pollsters [-] e365caeb-efd7-437b-aa10-e579f7c99f2b/network.outgoing.bytes volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.276 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.276 14 DEBUG ceilometer.compute.pollsters [-] 4b3df01c-0b2d-42c4-90f7-59c995377765/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.279 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.279 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.279 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.280 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.280 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.280 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.280 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.280 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:11:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:11:42.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
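
Note: the burst of "Finished processing pollster [...]" lines closes one polling interval: every meter in the task ran discovery, polled its resources, and reported completion. The shape of that loop, reduced to a runnable toy (all names and volumes are illustrative, not ceilometer's actual code):

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    LOG = logging.getLogger("polling.manager")

    def execute_polling_task_processing(pollsters, discover):
        for name, poll in pollsters.items():
            resources = discover()          # one discovery pass per pollster
            LOG.info("Polling pollster %s in the context of pollsters", name)
            for resource, volume in poll(resources):
                LOG.debug("%s/%s volume: %s", resource, name, volume)
            LOG.debug("Finished processing pollster [%s].", name)

    execute_polling_task_processing(
        {"disk.root.size": lambda rs: [(r, 1) for r in rs]},  # made-up volumes
        lambda: ["e365caeb", "85e7cedb", "4b3df01c"])
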
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.314 188707 DEBUG nova.compute.manager [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.315 188707 DEBUG oslo_concurrency.lockutils [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.316 188707 DEBUG oslo_concurrency.lockutils [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.317 188707 DEBUG oslo_concurrency.lockutils [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "fc3a62d6-b05f-4032-a883-8c231d29ff29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.318 188707 DEBUG nova.compute.manager [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] No waiting events found dispatching network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.318 188707 WARNING nova.compute.manager [req-2f10e9f2-3e33-455e-b1e6-549c2ef57851 req-5bd9b444-9627-4283-86bf-40b78f328585 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Received unexpected event network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec for instance with vm_state deleted and task_state None.
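
Note: the five nova lines above are the external-event dispatch pattern: neutron reports network-vif-plugged for port 89307b57..., nova takes the per-instance "-events" lock, pops a matching waiter, finds none, and, because the VM is already deleted, downgrades the event to a warning instead of dispatching it. A self-contained sketch of that flow (threading.Lock stands in for oslo's named lock):

    import threading
    from collections import defaultdict

    _waiters = defaultdict(dict)   # instance uuid -> {event name: threading.Event}
    _lock = threading.Lock()       # plays the role of the "<uuid>-events" lock

    def pop_instance_event(instance_uuid, event_name, vm_state):
        with _lock:
            waiter = _waiters[instance_uuid].pop(event_name, None)
        if waiter is None:
            print(f"No waiting events found dispatching {event_name}")
            if vm_state == "deleted":
                print(f"WARNING: unexpected {event_name} for deleted instance")
            return
        waiter.set()               # wake whatever was blocked on this event

    pop_instance_event("fc3a62d6-b05f-4032-a883-8c231d29ff29",
                       "network-vif-plugged-89307b57-fe85-45b9-b123-781c385e8fec",
                       "deleted")
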
Feb 24 16:11:43 compute-0 nova_compute[188703]: 2026-02-24 16:11:43.585 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:45 compute-0 nova_compute[188703]: 2026-02-24 16:11:45.793 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:48 compute-0 ovn_controller[98701]: 2026-02-24T16:11:48Z|00159|binding|INFO|Releasing lport f6294507-2873-4400-bee9-b9ef626c0371 from this chassis (sb_readonly=0)
Feb 24 16:11:48 compute-0 ovn_controller[98701]: 2026-02-24T16:11:48Z|00160|binding|INFO|Releasing lport d50b9c1e-a71e-49f6-a0bc-95207c7d9dc7 from this chassis (sb_readonly=0)
Feb 24 16:11:48 compute-0 ovn_controller[98701]: 2026-02-24T16:11:48Z|00161|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:11:48 compute-0 nova_compute[188703]: 2026-02-24 16:11:48.582 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:48 compute-0 nova_compute[188703]: 2026-02-24 16:11:48.587 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:50 compute-0 nova_compute[188703]: 2026-02-24 16:11:50.798 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:51 compute-0 nova_compute[188703]: 2026-02-24 16:11:51.093 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949496.0916586, 511a6d08-b421-4fcb-bb1a-13d6ee450a2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:11:51 compute-0 nova_compute[188703]: 2026-02-24 16:11:51.093 188707 INFO nova.compute.manager [-] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] VM Stopped (Lifecycle Event)
Feb 24 16:11:51 compute-0 nova_compute[188703]: 2026-02-24 16:11:51.119 188707 DEBUG nova.compute.manager [None req-ba5bdf10-7cf4-4a2f-a0de-75be12c3ab9b - - - - - -] [instance: 511a6d08-b421-4fcb-bb1a-13d6ee450a2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:11:51 compute-0 podman[255163]: 2026-02-24 16:11:51.121569034 +0000 UTC m=+0.079577771 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:11:51 compute-0 podman[255164]: 2026-02-24 16:11:51.143310225 +0000 UTC m=+0.091290625 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 24 16:11:53 compute-0 nova_compute[188703]: 2026-02-24 16:11:53.590 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:55 compute-0 podman[255204]: 2026-02-24 16:11:55.134518858 +0000 UTC m=+0.083194701 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi)
Feb 24 16:11:55 compute-0 podman[255203]: 2026-02-24 16:11:55.142742825 +0000 UTC m=+0.100274853 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Feb 24 16:11:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:55.739 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:11:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:55.740 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:11:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:11:55.740 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:11:55 compute-0 nova_compute[188703]: 2026-02-24 16:11:55.760 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949500.7576885, fc3a62d6-b05f-4032-a883-8c231d29ff29 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:11:55 compute-0 nova_compute[188703]: 2026-02-24 16:11:55.760 188707 INFO nova.compute.manager [-] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] VM Stopped (Lifecycle Event)
Feb 24 16:11:55 compute-0 nova_compute[188703]: 2026-02-24 16:11:55.787 188707 DEBUG nova.compute.manager [None req-ffe7d932-dee8-4d29-9013-2d5ff4f2ed39 - - - - - -] [instance: fc3a62d6-b05f-4032-a883-8c231d29ff29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:11:55 compute-0 nova_compute[188703]: 2026-02-24 16:11:55.803 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:58 compute-0 nova_compute[188703]: 2026-02-24 16:11:58.594 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:58 compute-0 nova_compute[188703]: 2026-02-24 16:11:58.622 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:11:59 compute-0 podman[204685]: time="2026-02-24T16:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:11:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31703 "" "Go-http-client/1.1"
Feb 24 16:11:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5305 "" "Go-http-client/1.1"
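
Note: the two "@ - - [...]" access lines are a metrics scraper (Go-http-client) hitting podman's libpod REST API through the service socket that podman[204685] is serving. The same containers/json query from Python over the UNIX socket (the socket path below is the rootful default and an assumption here):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
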
Feb 24 16:12:00 compute-0 nova_compute[188703]: 2026-02-24 16:12:00.807 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:01 compute-0 podman[255245]: 2026-02-24 16:12:01.1346679 +0000 UTC m=+0.098015490 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1770267347, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, distribution-scope=public, name=ubi9/ubi-minimal, vcs-type=git, version=9.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Feb 24 16:12:01 compute-0 openstack_network_exporter[207830]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:12:01 compute-0 openstack_network_exporter[207830]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
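
Note: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only exist for the userspace (netdev/DPDK) datapath; on a kernel-datapath compute node like this one ovs-vswitchd answers "please specify an existing datapath", so these exporter errors are expected noise. Reproducing the exporter's appctl call by hand (assumes ovs-appctl is on PATH and talks to the local vswitchd):

    import subprocess

    out = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                         capture_output=True, text=True)
    # Non-zero exit plus the "please specify an existing datapath" message.
    print(out.returncode, (out.stderr or out.stdout).strip())
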
Feb 24 16:12:03 compute-0 nova_compute[188703]: 2026-02-24 16:12:03.597 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:05.734 108420 DEBUG eventlet.wsgi.server [-] (108420) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:05.736 108420 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: Accept: */*
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: Connection: close
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: Content-Type: text/plain
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: Host: 169.254.169.254
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: User-Agent: curl/7.84.0
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: X-Forwarded-For: 10.100.0.12
Feb 24 16:12:05 compute-0 ovn_metadata_agent[108021]: X-Ovn-Network-Id: 1a08fe3b-f918-47ba-ae63-ee103c7afcba __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 24 16:12:05 compute-0 nova_compute[188703]: 2026-02-24 16:12:05.811 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.041 108420 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.042 108420 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.3060043
Feb 24 16:12:07 compute-0 haproxy-metadata-proxy-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254496]: 10.100.0.12:57560 [24/Feb/2026:16:12:05.733] listener listener/metadata 0/0/0/1308/1308 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
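
Note: this request/response group is the full OVN metadata path: a guest at 10.100.0.12 queries 169.254.169.254, the per-network haproxy (the line above) forwards it to the agent and adds the X-Forwarded-For and X-Ovn-Network-Id headers seen earlier, and the agent proxies it on to nova's metadata API (the 1.3 s "time:" is that round trip). From inside the instance, the equivalent request is simply:

    import urllib.request

    with urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/public-ipv4",
            timeout=10) as resp:
        print(resp.status, resp.read().decode())
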
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.200 108420 DEBUG eventlet.wsgi.server [-] (108420) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.201 108420 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: Accept: */*
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: Connection: close
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: Content-Length: 100
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: Content-Type: application/x-www-form-urlencoded
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: Host: 169.254.169.254
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: User-Agent: curl/7.84.0
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: X-Forwarded-For: 10.100.0.12
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: X-Ovn-Network-Id: 1a08fe3b-f918-47ba-ae63-ee103c7afcba
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.445 108420 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 24 16:12:07 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:07.445 108420 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2441945
Feb 24 16:12:07 compute-0 haproxy-metadata-proxy-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254496]: 10.100.0.12:56544 [24/Feb/2026:16:12:07.199] listener listener/metadata 0/0/0/245/245 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
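
Note: the POST is a guest agent storing an instance password blob with nova's metadata service for later retrieval (e.g. nova get-password); the logged body is tempest's 100-byte "test..." filler, matching Content-Length: 100. A sketch of the same call from inside the guest:

    import urllib.request

    body = b"test" * 25   # 100 bytes, as in the logged request
    req = urllib.request.Request(
        "http://169.254.169.254/openstack/2013-10-17/password",
        data=body, method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status)
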
Feb 24 16:12:08 compute-0 podman[255266]: 2026-02-24 16:12:08.186838988 +0000 UTC m=+0.133551273 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:12:08 compute-0 podman[255267]: 2026-02-24 16:12:08.209190006 +0000 UTC m=+0.152246471 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
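
The two health_status=healthy events are podman's timer-driven healthchecks executing the 'test' command from config_data ('/openstack/healthcheck ...') inside each container. The same check can be triggered on demand; a small sketch (assumes podman is on PATH, container name taken from the log):

    import subprocess

    def container_healthy(name: str) -> bool:
        # "podman healthcheck run" executes the container's configured
        # healthcheck once and exits non-zero if it fails.
        result = subprocess.run(['podman', 'healthcheck', 'run', name],
                                capture_output=True, text=True)
        return result.returncode == 0

    print(container_healthy('ceilometer_agent_compute'))
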
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.213 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.214 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.214 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.214 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.215 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
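
The four lockutils lines above show how nova serializes a delete: one lock keyed on the instance UUID guards the whole terminate path, and a short-lived inner "<uuid>-events" lock clears pending external events; acquire and release each log their waited/held timings. The same pattern with oslo.concurrency, reduced to a sketch (the bodies are placeholders):

    from oslo_concurrency import lockutils

    def do_terminate_instance(instance_uuid: str) -> None:
        with lockutils.lock(instance_uuid):
            # Inner lock mirrors the "<uuid>-events" lock in the log.
            with lockutils.lock(instance_uuid + '-events'):
                pass  # clear queued external events for this instance
            # ... shut the instance down on the hypervisor ...
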
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.216 188707 INFO nova.compute.manager [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Terminating instance
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.218 188707 DEBUG nova.compute.manager [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:12:08 compute-0 kernel: tap1c040558-99 (unregistering): left promiscuous mode
Feb 24 16:12:08 compute-0 NetworkManager[56995]: <info>  [1771949528.2591] device (tap1c040558-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:12:08 compute-0 ovn_controller[98701]: 2026-02-24T16:12:08Z|00162|binding|INFO|Releasing lport 1c040558-99c8-40bd-8b21-1337faca7edc from this chassis (sb_readonly=0)
Feb 24 16:12:08 compute-0 ovn_controller[98701]: 2026-02-24T16:12:08Z|00163|binding|INFO|Setting lport 1c040558-99c8-40bd-8b21-1337faca7edc down in Southbound
Feb 24 16:12:08 compute-0 ovn_controller[98701]: 2026-02-24T16:12:08Z|00164|binding|INFO|Removing iface tap1c040558-99 ovn-installed in OVS
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.272 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.278 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.282 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:c9:69 10.100.0.10'], port_security=['fa:16:3e:30:c9:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e365caeb-efd7-437b-aa10-e579f7c99f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b63b7c206004c42b699bdc42c129b6b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '302c0bad-634d-4905-abc7-a5c548d119ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92707d4c-a464-49d7-8f37-7fa0e55d12a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=1c040558-99c8-40bd-8b21-1337faca7edc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.284 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 1c040558-99c8-40bd-8b21-1337faca7edc in datapath 617264bd-8d71-44c7-9bb9-ef21a37be5eb unbound from our chassis
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.288 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 617264bd-8d71-44c7-9bb9-ef21a37be5eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.289 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[34e30108-58db-41dc-8c15-9d74caccb741]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.290 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb namespace which is not needed anymore
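
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's event dispatch at work: the agent registers row events against the southbound Port_Binding table and reacts when a bound port changes; here the old row shows up=[True] flipping to [False], which drives the unbind and the namespace teardown that follows. A skeleton of such an event class, using the constructor arguments exactly as echoed in the log (the run() body is hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr printed in the DEBUG line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries only the columns that changed, e.g.
            # old.up == [True] while row.up == [False].
            print('port', row.logical_port, 'went down')
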
Feb 24 16:12:08 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 24 16:12:08 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000009.scope: Consumed 42.105s CPU time.
Feb 24 16:12:08 compute-0 systemd-machined[158049]: Machine qemu-15-instance-00000009 terminated.
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [NOTICE]   (254722) : haproxy version is 2.8.14-c23fe91
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [NOTICE]   (254722) : path to executable is /usr/sbin/haproxy
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [WARNING]  (254722) : Exiting Master process...
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [WARNING]  (254722) : Exiting Master process...
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [ALERT]    (254722) : Current worker (254724) exited with code 143 (Terminated)
Feb 24 16:12:08 compute-0 neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb[254717]: [WARNING]  (254722) : All workers exited. Exiting... (0)
Feb 24 16:12:08 compute-0 systemd[1]: libpod-cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1.scope: Deactivated successfully.
Feb 24 16:12:08 compute-0 podman[255327]: 2026-02-24 16:12:08.431755589 +0000 UTC m=+0.054130948 container died cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.445 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.453 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.483 188707 INFO nova.virt.libvirt.driver [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Instance destroyed successfully.
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.484 188707 DEBUG nova.objects.instance [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lazy-loading 'resources' on Instance uuid e365caeb-efd7-437b-aa10-e579f7c99f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-06060071211af2dca8e3b97cc4ce3564d07c9558231166af11ca4478967ae66d-merged.mount: Deactivated successfully.
Feb 24 16:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1-userdata-shm.mount: Deactivated successfully.
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.514 188707 DEBUG nova.virt.libvirt.vif [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1465534534',display_name='tempest-ServerActionsTestJSON-server-1465534534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1465534534',id=9,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZyYKPc4V6M0wvJAG9T+xK5LUSQ9T3A/higDLxTyiLxe53PGIkxY4Fvqb7KzGKM0zXSbG9tTOZZ45MmiyEiALztFvtXt0JRIVYKiHXk5B1tyWpIojmBc9p6KFCMGGybeQ==',key_name='tempest-keypair-2037101796',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b63b7c206004c42b699bdc42c129b6b',ramdisk_id='',reservation_id='r-5riwlf7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1577843196',owner_user_name='tempest-ServerActionsTestJSON-1577843196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:11:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e3fc739339a5496cb9c0e2e0eebefd55',uuid=e365caeb-efd7-437b-aa10-e579f7c99f2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.514 188707 DEBUG nova.network.os_vif_util [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converting VIF {"id": "1c040558-99c8-40bd-8b21-1337faca7edc", "address": "fa:16:3e:30:c9:69", "network": {"id": "617264bd-8d71-44c7-9bb9-ef21a37be5eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1538167623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b63b7c206004c42b699bdc42c129b6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c040558-99", "ovs_interfaceid": "1c040558-99c8-40bd-8b21-1337faca7edc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.516 188707 DEBUG nova.network.os_vif_util [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.517 188707 DEBUG os_vif [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:12:08 compute-0 podman[255327]: 2026-02-24 16:12:08.518264391 +0000 UTC m=+0.140639760 container cleanup cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.519 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.520 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c040558-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.529 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
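
The DelPortCommand transaction is the os-vif OVS plugin deleting the tap interface from br-int through ovsdbapp. Roughly the same call issued standalone would look like this sketch (socket path and timeout are assumptions, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Equivalent of DelPortCommand(port=..., bridge='br-int', if_exists=True)
    api.del_port('tap1c040558-99', bridge='br-int',
                 if_exists=True).execute(check_error=True)
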
Feb 24 16:12:08 compute-0 systemd[1]: libpod-conmon-cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1.scope: Deactivated successfully.
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.532 188707 INFO os_vif [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=1c040558-99c8-40bd-8b21-1337faca7edc,network=Network(617264bd-8d71-44c7-9bb9-ef21a37be5eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c040558-99')
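
The Converting VIF / Converted object / Unplugging vif trio is nova translating its own VIF dict into an os-vif VIFOpenVSwitch object and handing it to the os_vif library, which ends in the DelPortCommand transaction above. A compressed sketch of that call path with only the fields the log prints (values copied from the log; treat the object wiring as illustrative, and note it needs a reachable OVSDB to actually run):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)
    net = network.Network(id='617264bd-8d71-44c7-9bb9-ef21a37be5eb',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(id='1c040558-99c8-40bd-8b21-1337faca7edc',
                           address='fa:16:3e:30:c9:69',
                           vif_name='tap1c040558-99', bridge_name='br-int',
                           network=net)
    info = instance_info.InstanceInfo(
        uuid='e365caeb-efd7-437b-aa10-e579f7c99f2b',
        name='instance-00000009')
    os_vif.unplug(v, info)  # removes the tap port from br-int
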
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.533 188707 INFO nova.virt.libvirt.driver [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Deleting instance files /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b_del
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.534 188707 INFO nova.virt.libvirt.driver [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Deletion of /var/lib/nova/instances/e365caeb-efd7-437b-aa10-e579f7c99f2b_del complete
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.600 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.615 188707 INFO nova.compute.manager [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Took 0.40 seconds to destroy the instance on the hypervisor.
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.616 188707 DEBUG oslo.service.loopingcall [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.616 188707 DEBUG nova.compute.manager [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.617 188707 DEBUG nova.network.neutron [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
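
"Waiting for function ... _deallocate_network_with_retries to return" is oslo.service's looping-call machinery: nova wraps the deallocation in a looping call that retries on failure and blocks on its result event. The retry scaffolding in isolation, using the simpler fixed-interval variant to keep the sketch short (the succeed-on-third-try body is made up):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _poll():
        attempts['n'] += 1
        if attempts['n'] == 3:  # pretend the third attempt succeeds
            raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_poll)
    print(timer.start(interval=1).wait())  # blocks until LoopingCallDone
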
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.619 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 podman[255371]: 2026-02-24 16:12:08.622373849 +0000 UTC m=+0.071965100 container remove cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.627 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3b58e7-9865-452a-aee1-81e92d0d7478]: (4, ('Tue Feb 24 04:12:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb (cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1)\ncfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1\nTue Feb 24 04:12:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb (cfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1)\ncfa67df35fcdf1bb598ccfd9f61f11702b60036fc11951e131cb799d0c7532e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.629 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2489c3cf-c201-48f4-9011-d0f7202ebf4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.630 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617264bd-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.632 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 kernel: tap617264bd-80: left promiscuous mode
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.635 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.637 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[10e3907f-6815-4d5a-b68f-77362533f05f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 nova_compute[188703]: 2026-02-24 16:12:08.646 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.652 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[ce56ac06-ae98-4c3c-b956-10b051678083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.653 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4b91df-66b9-4830-bbd7-cd797177c9e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.668 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[597d27c6-bb73-46d2-92e1-c08d55260579]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517378, 'reachable_time': 24187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255385, 'error': None, 'target': 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.671 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:12:08 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:08.671 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[61afc974-d8aa-48c1-b09a-ede1c525fe4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d617264bd\x2d8d71\x2d44c7\x2d9bb9\x2def21a37be5eb.mount: Deactivated successfully.
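
The privsep replies ending in "Namespace ovnmeta-... deleted" are the agent's privileged helper dumping the links left in the namespace (the large netlink reply above, which shows only 'lo' remaining) and then removing it; systemd then drops the matching run-netns mount. The removal corresponds roughly to this pyroute2 call, which neutron routes through privsep rather than invoking inline (sketch):

    from pyroute2 import netns

    ns = 'ovnmeta-617264bd-8d71-44c7-9bb9-ef21a37be5eb'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /var/run/netns/<ns>
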
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.365 188707 DEBUG nova.network.neutron [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.390 188707 INFO nova.compute.manager [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Took 0.77 seconds to deallocate network for instance.
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.462 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.463 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.471 188707 DEBUG nova.compute.manager [req-2b1e55b1-f6ee-499e-9086-a5d7a81edfcb req-4494c960-3e5a-458a-b39d-e7def00b707e 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Received event network-vif-deleted-1c040558-99c8-40bd-8b21-1337faca7edc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.514 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.515 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.576 188707 DEBUG nova.compute.provider_tree [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.592 188707 DEBUG nova.scheduler.client.report [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
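
The inventory dict the report client logs is what placement uses to size this provider: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. Applying that to the values above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
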
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.617 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.658 188707 INFO nova.scheduler.client.report [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Deleted allocations for instance e365caeb-efd7-437b-aa10-e579f7c99f2b
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.738 188707 DEBUG oslo_concurrency.lockutils [None req-ac4631a1-56e7-474d-93f6-ef1cc2ffd8be e3fc739339a5496cb9c0e2e0eebefd55 9b63b7c206004c42b699bdc42c129b6b - - default default] Lock "e365caeb-efd7-437b-aa10-e579f7c99f2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:09 compute-0 nova_compute[188703]: 2026-02-24 16:12:09.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
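
The _reclaim_queued_deletes task short-circuits because reclaim_instance_interval is at its default of 0: with the interval <= 0, soft delete is effectively disabled and deletes like the two traced here destroy the instance immediately instead of parking it in SOFT_DELETED. The knob as an oslo.config option, sketched from memory (help text paraphrased, not copied from nova):

    from oslo_config import cfg

    reclaim_opt = cfg.IntOpt(
        'reclaim_instance_interval', default=0,
        help='Interval in seconds for reclaiming SOFT_DELETED instances; '
             'a value <= 0 disables soft delete entirely.')
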
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.117 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.118 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.119 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.121 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.122 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.125 188707 INFO nova.compute.manager [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Terminating instance
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.128 188707 DEBUG nova.compute.manager [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:12:10 compute-0 kernel: tap540b66b9-f0 (unregistering): left promiscuous mode
Feb 24 16:12:10 compute-0 NetworkManager[56995]: <info>  [1771949530.1845] device (tap540b66b9-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.185 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 ovn_controller[98701]: 2026-02-24T16:12:10Z|00165|binding|INFO|Releasing lport 540b66b9-f088-4e4e-bc6a-2c20cee24320 from this chassis (sb_readonly=0)
Feb 24 16:12:10 compute-0 ovn_controller[98701]: 2026-02-24T16:12:10Z|00166|binding|INFO|Setting lport 540b66b9-f088-4e4e-bc6a-2c20cee24320 down in Southbound
Feb 24 16:12:10 compute-0 ovn_controller[98701]: 2026-02-24T16:12:10Z|00167|binding|INFO|Removing iface tap540b66b9-f0 ovn-installed in OVS
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.189 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.201 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:22 10.100.0.12'], port_security=['fa:16:3e:10:46:22 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b3df01c-0b2d-42c4-90f7-59c995377765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a652d479d5204330b31c0f67ffd65a20', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44eef4cc-551f-41af-97e7-bdef67551494 c338f274-5f9a-4760-9c0e-39ecdb6b2b31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb4ba59-322c-4717-9949-bb4517b92bfb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=540b66b9-f088-4e4e-bc6a-2c20cee24320) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.203 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 540b66b9-f088-4e4e-bc6a-2c20cee24320 in datapath 1a08fe3b-f918-47ba-ae63-ee103c7afcba unbound from our chassis
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.205 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a08fe3b-f918-47ba-ae63-ee103c7afcba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.209 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.215 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4dce3b46-9a3f-4122-93f0-7c0b5b9c8e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.216 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba namespace which is not needed anymore
Feb 24 16:12:10 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 24 16:12:10 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 39.820s CPU time.
Feb 24 16:12:10 compute-0 systemd-machined[158049]: Machine qemu-14-instance-0000000e terminated.
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.358 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [NOTICE]   (254494) : haproxy version is 2.8.14-c23fe91
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [NOTICE]   (254494) : path to executable is /usr/sbin/haproxy
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [WARNING]  (254494) : Exiting Master process...
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [WARNING]  (254494) : Exiting Master process...
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [ALERT]    (254494) : Current worker (254496) exited with code 143 (Terminated)
Feb 24 16:12:10 compute-0 neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba[254472]: [WARNING]  (254494) : All workers exited. Exiting... (0)
Feb 24 16:12:10 compute-0 systemd[1]: libpod-904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3.scope: Deactivated successfully.
Feb 24 16:12:10 compute-0 podman[255407]: 2026-02-24 16:12:10.398146763 +0000 UTC m=+0.071861548 container died 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.400 188707 INFO nova.virt.libvirt.driver [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Instance destroyed successfully.
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.400 188707 DEBUG nova.objects.instance [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lazy-loading 'resources' on Instance uuid 4b3df01c-0b2d-42c4-90f7-59c995377765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.414 188707 DEBUG nova.virt.libvirt.vif [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1047622473',display_name='tempest-TestServerBasicOps-server-1047622473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1047622473',id=14,image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/x/xq3/i+dOKRq5Q/ez7atKEg1OiLx+Yu5sv7g3D5S0LGXdnA7hoDIuXmRoobrxZzJJNC/MIVEamcvk9li//tVyxf3Y1I4DrDrARu9O2/vUb0QGAUiNV4FNYG24pebJA==',key_name='tempest-TestServerBasicOps-1767552992',keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:10:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a652d479d5204330b31c0f67ffd65a20',ramdisk_id='',reservation_id='r-5t78tkhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ee41af80-6a60-4735-8135-3a06de2a36b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-598118548',owner_user_name='tempest-TestServerBasicOps-598118548-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5cceae9386a64ff6b1ff736d2a86285f',uuid=4b3df01c-0b2d-42c4-90f7-59c995377765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.415 188707 DEBUG nova.network.os_vif_util [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converting VIF {"id": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "address": "fa:16:3e:10:46:22", "network": {"id": "1a08fe3b-f918-47ba-ae63-ee103c7afcba", "bridge": "br-int", "label": "tempest-TestServerBasicOps-646715538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a652d479d5204330b31c0f67ffd65a20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540b66b9-f0", "ovs_interfaceid": "540b66b9-f088-4e4e-bc6a-2c20cee24320", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.416 188707 DEBUG nova.network.os_vif_util [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.417 188707 DEBUG os_vif [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.419 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.420 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540b66b9-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.425 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.429 188707 INFO os_vif [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:46:22,bridge_name='br-int',has_traffic_filtering=True,id=540b66b9-f088-4e4e-bc6a-2c20cee24320,network=Network(1a08fe3b-f918-47ba-ae63-ee103c7afcba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540b66b9-f0')
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.429 188707 INFO nova.virt.libvirt.driver [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Deleting instance files /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765_del
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.430 188707 INFO nova.virt.libvirt.driver [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Deletion of /var/lib/nova/instances/4b3df01c-0b2d-42c4-90f7-59c995377765_del complete
Feb 24 16:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3-userdata-shm.mount: Deactivated successfully.
Feb 24 16:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-50cb1c0f16a8755fbe168ae65a81ae403a7958d7d3d5d1212caacb194b08f57e-merged.mount: Deactivated successfully.
Feb 24 16:12:10 compute-0 podman[255407]: 2026-02-24 16:12:10.45159119 +0000 UTC m=+0.125305985 container cleanup 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 24 16:12:10 compute-0 systemd[1]: libpod-conmon-904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3.scope: Deactivated successfully.
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.487 188707 INFO nova.compute.manager [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.487 188707 DEBUG oslo.service.loopingcall [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.488 188707 DEBUG nova.compute.manager [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.489 188707 DEBUG nova.network.neutron [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:12:10 compute-0 podman[255450]: 2026-02-24 16:12:10.533470184 +0000 UTC m=+0.060157945 container remove 904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.539 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[19aa76f1-8e53-4970-8a8d-f954911d3c4b]: (4, ('Tue Feb 24 04:12:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba (904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3)\n904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3\nTue Feb 24 04:12:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba (904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3)\n904b1bede6dd9b45731b7d7759626b2f0b1081618d3d3249208575a5c83583b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.541 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[932eb6a0-78da-4a21-a146-b2a8e8b775ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.542 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a08fe3b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:12:10 compute-0 kernel: tap1a08fe3b-f0: left promiscuous mode
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.548 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.559 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.563 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[64d95914-3a95-407e-b142-f3a8d91a6e6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.582 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[b72459a1-2130-4386-842f-1a3e0a7f4d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.583 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[251a4b2a-9a02-4510-bf29-0c72fefba196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.594 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[36a881ad-941e-48e5-8a9a-9682501cf947]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516542, 'reachable_time': 16782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255464, 'error': None, 'target': 'ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.596 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1a08fe3b-f918-47ba-ae63-ee103c7afcba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:12:10 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:10.596 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[a943b176-ecc1-4aab-8d38-69ef2a84695b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:12:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d1a08fe3b\x2df918\x2d47ba\x2dae63\x2dee103c7afcba.mount: Deactivated successfully.
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:10 compute-0 nova_compute[188703]: 2026-02-24 16:12:10.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.435 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.436 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.436 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.667 188707 DEBUG nova.compute.manager [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-unplugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.668 188707 DEBUG oslo_concurrency.lockutils [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.668 188707 DEBUG oslo_concurrency.lockutils [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.668 188707 DEBUG oslo_concurrency.lockutils [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.669 188707 DEBUG nova.compute.manager [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] No waiting events found dispatching network-vif-unplugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:12:11 compute-0 nova_compute[188703]: 2026-02-24 16:12:11.669 188707 DEBUG nova.compute.manager [req-3b033227-8148-4f1f-af76-9197af7dfc89 req-cc26cdd3-135a-4e72-b459-4dbae84addcf 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-unplugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.389 188707 DEBUG nova.network.neutron [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.409 188707 INFO nova.compute.manager [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Took 1.92 seconds to deallocate network for instance.
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.458 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.459 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.549 188707 DEBUG nova.compute.provider_tree [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.567 188707 DEBUG nova.scheduler.client.report [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.594 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.837 188707 INFO nova.scheduler.client.report [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Deleted allocations for instance 4b3df01c-0b2d-42c4-90f7-59c995377765
Feb 24 16:12:12 compute-0 nova_compute[188703]: 2026-02-24 16:12:12.930 188707 DEBUG oslo_concurrency.lockutils [None req-afe58061-d001-4b4d-a8db-fd88fea6ae3d 5cceae9386a64ff6b1ff736d2a86285f a652d479d5204330b31c0f67ffd65a20 - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:13 compute-0 podman[255465]: 2026-02-24 16:12:13.130370289 +0000 UTC m=+0.091106710 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.297 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.318 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.318 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.318 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.319 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.603 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.780 188707 DEBUG nova.compute.manager [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.780 188707 DEBUG oslo_concurrency.lockutils [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.781 188707 DEBUG oslo_concurrency.lockutils [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.781 188707 DEBUG oslo_concurrency.lockutils [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "4b3df01c-0b2d-42c4-90f7-59c995377765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.782 188707 DEBUG nova.compute.manager [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] No waiting events found dispatching network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.782 188707 WARNING nova.compute.manager [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received unexpected event network-vif-plugged-540b66b9-f088-4e4e-bc6a-2c20cee24320 for instance with vm_state deleted and task_state None.
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.782 188707 DEBUG nova.compute.manager [req-43b79a49-a83a-4035-b246-917cb74d74ee req-dc5dd064-4eca-4802-8c53-23b0e8c0dade 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Received event network-vif-deleted-540b66b9-f088-4e4e-bc6a-2c20cee24320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:13 compute-0 nova_compute[188703]: 2026-02-24 16:12:13.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:14 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:14.697 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:12:14 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:14.697 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:12:14 compute-0 nova_compute[188703]: 2026-02-24 16:12:14.705 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:14 compute-0 ovn_controller[98701]: 2026-02-24T16:12:14Z|00168|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:12:14 compute-0 nova_compute[188703]: 2026-02-24 16:12:14.877 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:14 compute-0 nova_compute[188703]: 2026-02-24 16:12:14.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:14 compute-0 ovn_controller[98701]: 2026-02-24T16:12:14Z|00169|binding|INFO|Releasing lport 0f982f60-a551-4bd9-8329-8decd220388f from this chassis (sb_readonly=0)
Feb 24 16:12:15 compute-0 nova_compute[188703]: 2026-02-24 16:12:15.012 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:15 compute-0 nova_compute[188703]: 2026-02-24 16:12:15.422 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:17 compute-0 nova_compute[188703]: 2026-02-24 16:12:17.956 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:17 compute-0 nova_compute[188703]: 2026-02-24 16:12:17.956 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:12:18 compute-0 nova_compute[188703]: 2026-02-24 16:12:18.608 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:18 compute-0 nova_compute[188703]: 2026-02-24 16:12:18.980 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.012 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.013 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.013 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.014 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.119 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.182 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.183 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.235 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.610 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.612 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=72.12841033935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.613 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.613 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.788 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.789 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.790 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.867 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.885 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.909 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:12:19 compute-0 nova_compute[188703]: 2026-02-24 16:12:19.910 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:20 compute-0 nova_compute[188703]: 2026-02-24 16:12:20.425 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:20 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:20.699 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:12:22 compute-0 podman[255498]: 2026-02-24 16:12:22.133228236 +0000 UTC m=+0.085490305 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:12:22 compute-0 podman[255499]: 2026-02-24 16:12:22.139022736 +0000 UTC m=+0.087248963 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:12:23 compute-0 nova_compute[188703]: 2026-02-24 16:12:23.480 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949528.4789538, e365caeb-efd7-437b-aa10-e579f7c99f2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:12:23 compute-0 nova_compute[188703]: 2026-02-24 16:12:23.480 188707 INFO nova.compute.manager [-] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] VM Stopped (Lifecycle Event)
Feb 24 16:12:23 compute-0 nova_compute[188703]: 2026-02-24 16:12:23.495 188707 DEBUG nova.compute.manager [None req-3a44ed35-f9d6-479f-90dd-3b238ab5e5dd - - - - - -] [instance: e365caeb-efd7-437b-aa10-e579f7c99f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:12:23 compute-0 nova_compute[188703]: 2026-02-24 16:12:23.613 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.391 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771949530.389929, 4b3df01c-0b2d-42c4-90f7-59c995377765 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.392 188707 INFO nova.compute.manager [-] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] VM Stopped (Lifecycle Event)
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.417 188707 DEBUG nova.compute.manager [None req-7a6cea1d-f893-49cd-8b42-8aa897f97556 - - - - - -] [instance: 4b3df01c-0b2d-42c4-90f7-59c995377765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.428 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:12:25 compute-0 nova_compute[188703]: 2026-02-24 16:12:25.960 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:12:26 compute-0 podman[255542]: 2026-02-24 16:12:26.13605363 +0000 UTC m=+0.093226389 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Feb 24 16:12:26 compute-0 podman[255543]: 2026-02-24 16:12:26.149057709 +0000 UTC m=+0.106713391 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 16:12:28 compute-0 nova_compute[188703]: 2026-02-24 16:12:28.615 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:29 compute-0 podman[204685]: time="2026-02-24T16:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:12:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:12:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 24 16:12:30 compute-0 nova_compute[188703]: 2026-02-24 16:12:30.433 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:31 compute-0 openstack_network_exporter[207830]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:12:31 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:12:31 compute-0 openstack_network_exporter[207830]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:12:31 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:12:32 compute-0 podman[255578]: 2026-02-24 16:12:32.152681698 +0000 UTC m=+0.110336422 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, release=1770267347, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Feb 24 16:12:33 compute-0 nova_compute[188703]: 2026-02-24 16:12:33.617 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:35 compute-0 nova_compute[188703]: 2026-02-24 16:12:35.439 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:38 compute-0 nova_compute[188703]: 2026-02-24 16:12:38.619 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:39 compute-0 podman[255599]: 2026-02-24 16:12:39.155511851 +0000 UTC m=+0.108763407 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:12:39 compute-0 podman[255600]: 2026-02-24 16:12:39.213733591 +0000 UTC m=+0.153260229 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 24 16:12:40 compute-0 nova_compute[188703]: 2026-02-24 16:12:40.443 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:43 compute-0 nova_compute[188703]: 2026-02-24 16:12:43.622 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:44 compute-0 podman[255643]: 2026-02-24 16:12:44.14239159 +0000 UTC m=+0.095051569 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:12:45 compute-0 nova_compute[188703]: 2026-02-24 16:12:45.447 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:48 compute-0 nova_compute[188703]: 2026-02-24 16:12:48.625 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:50 compute-0 nova_compute[188703]: 2026-02-24 16:12:50.450 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:52.648 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:12:52 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:52.649 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:12:52 compute-0 nova_compute[188703]: 2026-02-24 16:12:52.650 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:53 compute-0 podman[255668]: 2026-02-24 16:12:53.121906011 +0000 UTC m=+0.070073008 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 24 16:12:53 compute-0 podman[255667]: 2026-02-24 16:12:53.134584992 +0000 UTC m=+0.092154518 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:12:53 compute-0 nova_compute[188703]: 2026-02-24 16:12:53.630 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:55 compute-0 nova_compute[188703]: 2026-02-24 16:12:55.456 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:55.740 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:12:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:55.741 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:12:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:12:55.742 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:12:57 compute-0 podman[255706]: 2026-02-24 16:12:57.177573516 +0000 UTC m=+0.128308198 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, version=9.4, architecture=x86_64, distribution-scope=public, vcs-type=git, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:12:57 compute-0 podman[255707]: 2026-02-24 16:12:57.186979927 +0000 UTC m=+0.133259456 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:12:58 compute-0 nova_compute[188703]: 2026-02-24 16:12:58.635 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:12:59 compute-0 podman[204685]: time="2026-02-24T16:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:12:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:12:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Feb 24 16:13:00 compute-0 nova_compute[188703]: 2026-02-24 16:13:00.460 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:00 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:13:00.652 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:13:01 compute-0 openstack_network_exporter[207830]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:13:01 compute-0 openstack_network_exporter[207830]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:13:03 compute-0 podman[255744]: 2026-02-24 16:13:03.147969446 +0000 UTC m=+0.108082459 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, release=1770267347, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Feb 24 16:13:03 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 16:13:03 compute-0 nova_compute[188703]: 2026-02-24 16:13:03.637 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:05 compute-0 nova_compute[188703]: 2026-02-24 16:13:05.463 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:08 compute-0 nova_compute[188703]: 2026-02-24 16:13:08.638 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:08 compute-0 nova_compute[188703]: 2026-02-24 16:13:08.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:08 compute-0 nova_compute[188703]: 2026-02-24 16:13:08.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:09 compute-0 sshd-session[255766]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 16:13:09 compute-0 nova_compute[188703]: 2026-02-24 16:13:09.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:09 compute-0 nova_compute[188703]: 2026-02-24 16:13:09.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:13:10 compute-0 podman[255768]: 2026-02-24 16:13:10.151686254 +0000 UTC m=+0.109738475 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 24 16:13:10 compute-0 podman[255769]: 2026-02-24 16:13:10.191048962 +0000 UTC m=+0.146731738 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 24 16:13:10 compute-0 nova_compute[188703]: 2026-02-24 16:13:10.465 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:10 compute-0 nova_compute[188703]: 2026-02-24 16:13:10.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:10 compute-0 nova_compute[188703]: 2026-02-24 16:13:10.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:13:10 compute-0 nova_compute[188703]: 2026-02-24 16:13:10.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:13:11 compute-0 nova_compute[188703]: 2026-02-24 16:13:11.247 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:13:11 compute-0 nova_compute[188703]: 2026-02-24 16:13:11.247 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:13:11 compute-0 nova_compute[188703]: 2026-02-24 16:13:11.248 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:13:11 compute-0 nova_compute[188703]: 2026-02-24 16:13:11.248 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.437 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.452 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.452 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.641 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:13 compute-0 nova_compute[188703]: 2026-02-24 16:13:13.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:14 compute-0 podman[255808]: 2026-02-24 16:13:14.750409483 +0000 UTC m=+0.073447402 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:13:14 compute-0 nova_compute[188703]: 2026-02-24 16:13:14.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:14 compute-0 nova_compute[188703]: 2026-02-24 16:13:14.974 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:15 compute-0 nova_compute[188703]: 2026-02-24 16:13:15.470 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:17 compute-0 sshd-session[255831]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 16:13:18 compute-0 nova_compute[188703]: 2026-02-24 16:13:18.647 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.475 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.979 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.980 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:13:20 compute-0 nova_compute[188703]: 2026-02-24 16:13:20.982 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.083 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.166 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.168 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.232 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.599 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.601 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=72.12841033935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.602 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.603 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.683 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.684 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.684 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.831 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.846 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.847 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:13:21 compute-0 nova_compute[188703]: 2026-02-24 16:13:21.848 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:13:23 compute-0 nova_compute[188703]: 2026-02-24 16:13:23.647 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:23 compute-0 podman[255841]: 2026-02-24 16:13:23.800766903 +0000 UTC m=+0.102984589 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:13:23 compute-0 podman[255840]: 2026-02-24 16:13:23.813581217 +0000 UTC m=+0.122359935 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:13:24 compute-0 ovn_controller[98701]: 2026-02-24T16:13:24Z|00170|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb 24 16:13:24 compute-0 sshd-session[255882]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 16:13:25 compute-0 nova_compute[188703]: 2026-02-24 16:13:25.480 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:28 compute-0 podman[255885]: 2026-02-24 16:13:28.129233119 +0000 UTC m=+0.086428971 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Feb 24 16:13:28 compute-0 podman[255884]: 2026-02-24 16:13:28.155743882 +0000 UTC m=+0.111799272 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, architecture=x86_64, container_name=kepler)
Feb 24 16:13:28 compute-0 nova_compute[188703]: 2026-02-24 16:13:28.652 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:29 compute-0 podman[204685]: time="2026-02-24T16:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:13:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:13:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
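The two "@ - - GET ..." lines are the podman system service's API access log: a Go client (the Go-http-client/1.1 user agent) is listing containers and sampling their stats through the libpod REST endpoint. A sketch of the same containers/json call over the service's unix socket, using only the standard library; the rootful default socket path below is an assumption:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(json.loads(resp.read())), "containers")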
Feb 24 16:13:30 compute-0 nova_compute[188703]: 2026-02-24 16:13:30.484 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:31 compute-0 openstack_network_exporter[207830]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:13:31 compute-0 openstack_network_exporter[207830]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
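Both appctl errors come from PMD statistics commands (dpif-netdev/pmd-perf-show, dpif-netdev/pmd-rxq-show) that only answer for a userspace (netdev/DPDK) datapath; ovs-vswitchd replying "please specify an existing datapath" suggests this host runs the kernel datapath only, so the exporter's PMD queries cannot succeed. A sketch of probing for a netdev datapath before asking, assuming ovs-appctl can reach the vswitchd control socket:

import subprocess

def appctl(*args: str) -> str:
    """Run an ovs-appctl command and return its stdout."""
    return subprocess.run(
        ["ovs-appctl", *args], check=True, capture_output=True, text=True
    ).stdout

datapaths = appctl("dpif/show")  # lists configured datapaths and their ports
if "netdev" in datapaths:
    print(appctl("dpif-netdev/pmd-rxq-show"))
else:
    print("no userspace (netdev) datapath; PMD stats unavailable")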
Feb 24 16:13:33 compute-0 nova_compute[188703]: 2026-02-24 16:13:33.653 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:34 compute-0 podman[255921]: 2026-02-24 16:13:34.126465281 +0000 UTC m=+0.084915678 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, version=9.7, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, io.openshift.expose-services=)
Feb 24 16:13:35 compute-0 nova_compute[188703]: 2026-02-24 16:13:35.488 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:38 compute-0 nova_compute[188703]: 2026-02-24 16:13:38.655 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.838 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.839 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
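The manager has just warned that the [pollsters] source has more pollsters than worker threads, and the registration lines below show every pollster being attached to one shared concurrent.futures ThreadPoolExecutor. An illustrative sketch of that pattern (not ceilometer code; recent releases expose the thread count as a polling option named threads_to_process_pollsters, a name given here as an assumption):

from concurrent.futures import ThreadPoolExecutor

def poll(meter: str) -> str:
    # stand-in for one pollster's sample-gathering work
    return f"polled {meter}"

meters = ["memory.usage", "disk.device.allocation", "cpu"]
# max_workers=1 mirrors "Processing pollsters ... with [1] threads" above:
# pollsters run strictly one after another, so a long list stretches the cycle.
with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(poll, meters):
        print(result)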
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.849 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.850 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:13:39.850751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.878 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 47.38671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
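Ceilometer reports memory.usage in MB, so the sample above reads directly against the flavor allocation in the discovery record (m1.nano, ram 128). A worked check:

flavor_ram_mb = 128            # 'ram': 128 in the instance data above
memory_usage_mb = 47.38671875  # memory.usage volume logged above
print(f"{memory_usage_mb / flavor_ram_mb:.1%} of allocated RAM in use")
# -> 37.0% of allocated RAM in use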
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:13:39.881176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.901 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.902 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.904 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:13:39.904519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.909 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.912 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:13:39.911886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.913 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.915 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:13:39.914703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.916 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.917 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:13:39.917308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
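The power.state volume of 1 is consistent with libvirt's domain state codes (1 is VIR_DOMAIN_RUNNING, matching vm_state 'running' in the discovery record); that this pollster reports the raw libvirt code is an inference from the log, not confirmed here. The codes, for reference:

# libvirt VIR_DOMAIN_* state codes; the power.state sample above reads 1.
LIBVIRT_DOMAIN_STATES = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATES[1])  # -> running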
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.920 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.921 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:13:39.920025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.922 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:13:39.923643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.985 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 29330432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.986 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.987 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.989 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.990 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:13:39.988674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.991 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.992 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1037277216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.992 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 103970129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:13:39.991563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.994 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.995 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.995 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 216060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
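The cpu meter is cumulative CPU time in nanoseconds, so the single sample above (216060000000 ns, about 216.06 s) is not a utilization figure; a rate needs the delta between two samples. A worked sketch with an assumed follow-up sample:

sample1_ns, t1 = 216_060_000_000, 0.0
sample2_ns, t2 = 216_760_000_000, 10.0  # hypothetical next poll, 10 s later
vcpus = 1                               # flavor m1.nano from the log above
util = (sample2_ns - sample1_ns) / ((t2 - t1) * 1e9 * vcpus)
print(f"{util:.1%} CPU utilization over the interval")  # -> 7.0% CPU utilization over the interval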
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:13:39.995144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.997 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.998 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.998 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:39.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:13:39.997856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.000 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.001 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.001 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:13:40.001570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.003 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.004 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:13:40.004455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.008 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.009 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
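The two disk.device.usage samples (29884416 and 509952 bytes) are one per block device of instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124; the agent reads them through its libvirt inspector. A hedged standalone equivalent, with illustrative device names:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124')
    for dev in ('vda', 'vdb'):                    # illustrative device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, allocation)                    # allocated bytes, as sampled above
    conn.close()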
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:13:40.007850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.012 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3781114837 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:13:40.011822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.013 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
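disk.device.write.latency is a cumulative counter in nanoseconds: 3781114837 ns is roughly 3.8 s of accumulated write time on the first device, and the second device's 0 matches its zero write bytes and requests below. The same counter is exposed by libvirt; a sketch with an illustrative device name:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124')
    stats = dom.blockStatsFlags('vda')            # illustrative device name
    print(stats.get('wr_total_times'))            # cumulative write time, ns
    conn.close()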
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.015 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.015 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:13:40.015214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.018 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.019 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.021 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:13:40.018480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:13:40.020142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.025 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:13:40.021397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:13:40.022482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:13:40.023497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:13:40.024892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.026 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
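network.outgoing.bytes (1620) and the earlier network.outgoing.packets (16) are the tx counters of the instance's tap interface. A hedged libvirt equivalent; the interface name is illustrative, the agent discovers the real one from the domain XML:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124')
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats('tap85e7cedb-f8')
    print(tx_bytes, tx_pkts)                      # 1620 and 16 in this cycle
    conn.close()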
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:13:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:13:40.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:13:40.026405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
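Two workers interleave throughout the cycle: worker 14 polls and stamps "Pollster heartbeat update", while worker 12 later logs "Updated heartbeat for ..." with the stamped timestamp. One plausible shape for that hand-off, purely illustrative and not taken from the ceilometer source:

    import datetime, queue, threading

    updates = queue.Queue()

    def heartbeat(meter):                         # called by the polling worker
        updates.put((meter, datetime.datetime.now(datetime.timezone.utc)))

    def status_writer():                          # separate worker persists the stamps
        while True:
            meter, ts = updates.get()
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    threading.Thread(target=status_writer, daemon=True).start()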
Feb 24 16:13:40 compute-0 nova_compute[188703]: 2026-02-24 16:13:40.491 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:41 compute-0 podman[255941]: 2026-02-24 16:13:41.173803286 +0000 UTC m=+0.118241961 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 16:13:41 compute-0 podman[255942]: 2026-02-24 16:13:41.207372964 +0000 UTC m=+0.158104532 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
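Each podman health_status=healthy event records podman running the container's configured healthcheck (the 'healthcheck' entry in config_data above). The same check can be triggered by hand; a small sketch, assuming the podman CLI is on PATH and the container name matches:

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True).returncode
    print("healthy" if rc == 0 else "unhealthy")  # exit code 0 means healthy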
Feb 24 16:13:43 compute-0 nova_compute[188703]: 2026-02-24 16:13:43.658 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:45 compute-0 podman[255985]: 2026-02-24 16:13:45.154991031 +0000 UTC m=+0.100937052 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:13:45 compute-0 nova_compute[188703]: 2026-02-24 16:13:45.496 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:48 compute-0 nova_compute[188703]: 2026-02-24 16:13:48.662 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:50 compute-0 nova_compute[188703]: 2026-02-24 16:13:50.500 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:53 compute-0 nova_compute[188703]: 2026-02-24 16:13:53.665 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
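The recurring "[POLLIN] on fd 26" lines are the OVSDB IDL inside nova_compute waking from its poll loop when the ovsdb socket becomes readable. A runnable miniature of that loop with the same python-ovs poller; a pipe stands in for the socket, and the 5 s timer mirrors the idle cadence seen here:

    import os, select
    from ovs import poller

    r, w = os.pipe()                  # stand-in for the OVSDB connection fd
    p = poller.Poller()
    p.fd_wait(r, select.POLLIN)       # register read interest, as the IDL does
    p.timer_wait(5000)                # wake after 5 s even if idle
    os.write(w, b"x")                 # make the fd readable
    p.block()                         # returns once readable -> "[POLLIN] on fd N"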
Feb 24 16:13:54 compute-0 podman[256009]: 2026-02-24 16:13:54.138159243 +0000 UTC m=+0.091495891 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:13:54 compute-0 podman[256010]: 2026-02-24 16:13:54.157671533 +0000 UTC m=+0.102930247 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 24 16:13:55 compute-0 nova_compute[188703]: 2026-02-24 16:13:55.505 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:13:55.742 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:13:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:13:55.742 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:13:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:13:55.743 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
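The acquire/acquired/released trio with waited and held timings is the standard oslo_concurrency trace around a synchronized section; neutron's ProcessMonitor guards _check_child_processes with such a lock. A minimal reproduction of the same pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass                          # held ~0.001s in the log above

    _check_child_processes()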
Feb 24 16:13:58 compute-0 nova_compute[188703]: 2026-02-24 16:13:58.668 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:13:59 compute-0 podman[256050]: 2026-02-24 16:13:59.138652489 +0000 UTC m=+0.098929876 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:13:59 compute-0 podman[256051]: 2026-02-24 16:13:59.140052847 +0000 UTC m=+0.100163909 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 16:13:59 compute-0 podman[204685]: time="2026-02-24T16:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:13:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:13:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
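The two libpod GETs show the podman API service answering a metrics client (the prometheus-podman-exporter container seen earlier) over its socket. The same endpoint can be queried with the podman-py client, assuming that package is installed:

    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for c in client.containers.list(all=True):   # GET /libpod/containers/json?all=true
            print(c.name, c.status)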
Feb 24 16:14:00 compute-0 nova_compute[188703]: 2026-02-24 16:14:00.510 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:01 compute-0 openstack_network_exporter[207830]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:14:01 compute-0 openstack_network_exporter[207830]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
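Both errors come from appctl commands that only exist for the userspace (dpif-netdev) datapath: on a host running the kernel datapath there is no netdev datapath to query, so pmd-perf-show and pmd-rxq-show fail with "please specify an existing datapath". A hedged probe for that condition:

    import subprocess

    def pmd_rxq_show():
        # Succeeds only with an OVS userspace/DPDK datapath; fails as above
        # when only the kernel datapath exists.
        return subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                              capture_output=True, text=True, check=False)

    res = pmd_rxq_show()
    print(res.returncode, res.stderr.strip())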
Feb 24 16:14:03 compute-0 nova_compute[188703]: 2026-02-24 16:14:03.676 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:05 compute-0 podman[256085]: 2026-02-24 16:14:05.166390294 +0000 UTC m=+0.129812299 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Feb 24 16:14:05 compute-0 nova_compute[188703]: 2026-02-24 16:14:05.513 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:08 compute-0 nova_compute[188703]: 2026-02-24 16:14:08.676 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:10 compute-0 nova_compute[188703]: 2026-02-24 16:14:10.517 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:10 compute-0 nova_compute[188703]: 2026-02-24 16:14:10.849 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:10 compute-0 nova_compute[188703]: 2026-02-24 16:14:10.849 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:10 compute-0 nova_compute[188703]: 2026-02-24 16:14:10.850 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:10 compute-0 nova_compute[188703]: 2026-02-24 16:14:10.850 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
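This burst is oslo_service iterating ComputeManager's declared periodic tasks; _reclaim_queued_deletes bails out immediately because reclaim_instance_interval is not set. A compact sketch of the same mechanism; the option registration and spacing are illustrative, not nova's actual defaults:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return                # logged as "skipping..."

    Manager(CONF).run_periodic_tasks(context=None)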
Feb 24 16:14:12 compute-0 podman[256105]: 2026-02-24 16:14:12.156997819 +0000 UTC m=+0.100239152 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 24 16:14:12 compute-0 podman[256106]: 2026-02-24 16:14:12.229551845 +0000 UTC m=+0.170301679 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:14:12 compute-0 nova_compute[188703]: 2026-02-24 16:14:12.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:12 compute-0 nova_compute[188703]: 2026-02-24 16:14:12.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:14:12 compute-0 nova_compute[188703]: 2026-02-24 16:14:12.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:14:13 compute-0 nova_compute[188703]: 2026-02-24 16:14:13.679 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:14 compute-0 nova_compute[188703]: 2026-02-24 16:14:14.084 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:14:14 compute-0 nova_compute[188703]: 2026-02-24 16:14:14.085 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:14:14 compute-0 nova_compute[188703]: 2026-02-24 16:14:14.085 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:14:14 compute-0 nova_compute[188703]: 2026-02-24 16:14:14.086 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:14:15 compute-0 nova_compute[188703]: 2026-02-24 16:14:15.520 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:16 compute-0 podman[256148]: 2026-02-24 16:14:16.180785093 +0000 UTC m=+0.132306110 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.298 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.324 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.325 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.326 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.327 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:17 compute-0 nova_compute[188703]: 2026-02-24 16:14:17.328 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:18 compute-0 nova_compute[188703]: 2026-02-24 16:14:18.322 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:18 compute-0 nova_compute[188703]: 2026-02-24 16:14:18.686 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:19 compute-0 sshd-session[256174]: Invalid user admin from 185.156.73.233 port 59248
Feb 24 16:14:19 compute-0 sshd-session[256174]: Connection closed by invalid user admin 185.156.73.233 port 59248 [preauth]
Feb 24 16:14:20 compute-0 nova_compute[188703]: 2026-02-24 16:14:20.525 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:21 compute-0 nova_compute[188703]: 2026-02-24 16:14:21.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:14:21 compute-0 nova_compute[188703]: 2026-02-24 16:14:21.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:21 compute-0 nova_compute[188703]: 2026-02-24 16:14:21.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:21 compute-0 nova_compute[188703]: 2026-02-24 16:14:21.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:21 compute-0 nova_compute[188703]: 2026-02-24 16:14:21.985 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.084 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.167 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.168 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.226 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.534 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.536 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=72.12852096557617GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.536 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.537 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.665 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.666 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.667 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.791 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.809 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.812 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:14:22 compute-0 nova_compute[188703]: 2026-02-24 16:14:22.813 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:23 compute-0 nova_compute[188703]: 2026-02-24 16:14:23.685 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:25 compute-0 podman[256183]: 2026-02-24 16:14:25.119921718 +0000 UTC m=+0.072888699 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:14:25 compute-0 podman[256184]: 2026-02-24 16:14:25.157286412 +0000 UTC m=+0.109071220 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:14:25 compute-0 nova_compute[188703]: 2026-02-24 16:14:25.528 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:28 compute-0 nova_compute[188703]: 2026-02-24 16:14:28.686 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:29 compute-0 podman[204685]: time="2026-02-24T16:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:14:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:14:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 24 16:14:30 compute-0 podman[256222]: 2026-02-24 16:14:30.135731061 +0000 UTC m=+0.092715226 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, version=9.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:14:30 compute-0 podman[256223]: 2026-02-24 16:14:30.143167473 +0000 UTC m=+0.098613446 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:14:30 compute-0 nova_compute[188703]: 2026-02-24 16:14:30.531 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:31 compute-0 openstack_network_exporter[207830]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:14:31 compute-0 openstack_network_exporter[207830]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:14:33 compute-0 nova_compute[188703]: 2026-02-24 16:14:33.689 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:35 compute-0 nova_compute[188703]: 2026-02-24 16:14:35.534 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:36 compute-0 podman[256258]: 2026-02-24 16:14:36.144727456 +0000 UTC m=+0.102678646 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.578 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.580 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.612 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.691 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.705 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.706 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.717 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.719 188707 INFO nova.compute.claims [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Claim successful on node compute-0.ctlplane.example.com
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.852 188707 DEBUG nova.compute.provider_tree [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.868 188707 DEBUG nova.scheduler.client.report [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.895 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.896 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.942 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.943 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.959 188707 INFO nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 24 16:14:38 compute-0 nova_compute[188703]: 2026-02-24 16:14:38.979 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.067 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.070 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.071 188707 INFO nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Creating image(s)
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.073 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.074 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.075 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.103 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.124 188707 DEBUG nova.policy [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95c31253f307489ba7dfda7d2823f04a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.183 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.185 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "bda4b94876a964317b1f9cfba4b35250036d1777" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.186 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.213 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.315 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.316 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777,backing_fmt=raw /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.369 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777,backing_fmt=raw /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.370 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "bda4b94876a964317b1f9cfba4b35250036d1777" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.371 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.433 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.435 188707 DEBUG nova.virt.disk.api [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Checking if we can resize image /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:180
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.435 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.483 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.484 188707 DEBUG nova.virt.disk.api [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Cannot resize image /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:186
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.485 188707 DEBUG nova.objects.instance [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'migration_context' on Instance uuid 25045b6d-8da1-4e43-b027-bab77ff8a2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.513 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.514 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Ensure instance console log exists: /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.515 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.515 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:39 compute-0 nova_compute[188703]: 2026-02-24 16:14:39.516 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:40 compute-0 nova_compute[188703]: 2026-02-24 16:14:40.089 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:40.088 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:14:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:40.091 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:14:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:40.091 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:40 compute-0 nova_compute[188703]: 2026-02-24 16:14:40.221 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Successfully created port: 8140cef6-d8b1-4098-8470-3077a2c6668d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 24 16:14:40 compute-0 nova_compute[188703]: 2026-02-24 16:14:40.537 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.321 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Successfully updated port: 8140cef6-d8b1-4098-8470-3077a2c6668d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.340 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.341 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.342 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.401 188707 DEBUG nova.compute.manager [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-changed-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.402 188707 DEBUG nova.compute.manager [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Refreshing instance network info cache due to event network-changed-8140cef6-d8b1-4098-8470-3077a2c6668d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.403 188707 DEBUG oslo_concurrency.lockutils [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:14:41 compute-0 nova_compute[188703]: 2026-02-24 16:14:41.486 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.614 188707 DEBUG nova.network.neutron [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.649 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.650 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Instance network_info: |[{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.652 188707 DEBUG oslo_concurrency.lockutils [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.653 188707 DEBUG nova.network.neutron [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Refreshing network info cache for port 8140cef6-d8b1-4098-8470-3077a2c6668d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.658 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Start _get_guest_xml network_info=[{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:09:43Z,direct_url=<?>,disk_format='qcow2',id=c4831085-6e4d-4710-9d1c-263fd9bf6235,min_disk=0,min_ram=0,name='tempest-scenario-img--996897372',owner='95c31253f307489ba7dfda7d2823f04a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:09:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'image_id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.671 188707 WARNING nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.688 188707 DEBUG nova.virt.libvirt.host [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.689 188707 DEBUG nova.virt.libvirt.host [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.697 188707 DEBUG nova.virt.libvirt.host [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.698 188707 DEBUG nova.virt.libvirt.host [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.699 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.700 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-24T16:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3303ac8b-27ad-4047-abf8-38e38cd23b6f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-24T16:09:43Z,direct_url=<?>,disk_format='qcow2',id=c4831085-6e4d-4710-9d1c-263fd9bf6235,min_disk=0,min_ram=0,name='tempest-scenario-img--996897372',owner='95c31253f307489ba7dfda7d2823f04a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-24T16:09:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.702 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.703 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.704 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.705 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.706 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.707 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.708 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.709 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.710 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.711 188707 DEBUG nova.virt.hardware [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.717 188707 DEBUG nova.virt.libvirt.vif [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:14:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',id=15,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-sonqcq10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:14:39Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=25045b6d-8da1-4e43-b027-bab77ff8a2c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.718 188707 DEBUG nova.network.os_vif_util [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.719 188707 DEBUG nova.network.os_vif_util [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.720 188707 DEBUG nova.objects.instance [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'pci_devices' on Instance uuid 25045b6d-8da1-4e43-b027-bab77ff8a2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.739 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] End _get_guest_xml xml=<domain type="kvm">
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <uuid>25045b6d-8da1-4e43-b027-bab77ff8a2c1</uuid>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <name>instance-0000000f</name>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <memory>131072</memory>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <vcpu>1</vcpu>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <metadata>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:name>te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec</nova:name>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:creationTime>2026-02-24 16:14:42</nova:creationTime>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:flavor name="m1.nano">
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:memory>128</nova:memory>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:disk>1</nova:disk>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:swap>0</nova:swap>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:ephemeral>0</nova:ephemeral>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:vcpus>1</nova:vcpus>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       </nova:flavor>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:owner>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:user uuid="69d3eddd2a7d49bf9a69e0ccbb00f957">tempest-PrometheusGabbiTest-1117509900-project-member</nova:user>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:project uuid="95c31253f307489ba7dfda7d2823f04a">tempest-PrometheusGabbiTest-1117509900</nova:project>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       </nova:owner>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:root type="image" uuid="c4831085-6e4d-4710-9d1c-263fd9bf6235"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <nova:ports>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         <nova:port uuid="8140cef6-d8b1-4098-8470-3077a2c6668d">
Feb 24 16:14:42 compute-0 nova_compute[188703]:           <nova:ip type="fixed" address="10.100.0.204" ipVersion="4"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:         </nova:port>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       </nova:ports>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </nova:instance>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </metadata>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <sysinfo type="smbios">
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <system>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="manufacturer">RDO</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="product">OpenStack Compute</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="serial">25045b6d-8da1-4e43-b027-bab77ff8a2c1</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="uuid">25045b6d-8da1-4e43-b027-bab77ff8a2c1</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <entry name="family">Virtual Machine</entry>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </system>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </sysinfo>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <os>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <boot dev="hd"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <smbios mode="sysinfo"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </os>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <features>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <acpi/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <apic/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <vmcoreinfo/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </features>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <clock offset="utc">
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <timer name="pit" tickpolicy="delay"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <timer name="hpet" present="no"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </clock>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <cpu mode="host-model" match="exact">
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <topology sockets="1" cores="1" threads="1"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </cpu>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   <devices>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <disk type="file" device="disk">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <driver name="qemu" type="qcow2" cache="none"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <target dev="vda" bus="virtio"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <disk type="file" device="cdrom">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <driver name="qemu" type="raw" cache="none"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <source file="/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.config"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <target dev="sda" bus="sata"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </disk>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <interface type="ethernet">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <mac address="fa:16:3e:b7:9e:74"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <driver name="vhost" rx_queue_size="512"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <mtu size="1442"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <target dev="tap8140cef6-d8"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </interface>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <serial type="pty">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <log file="/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/console.log" append="off"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </serial>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <video>
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <model type="virtio"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </video>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <input type="tablet" bus="usb"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <rng model="virtio">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <backend model="random">/dev/urandom</backend>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </rng>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="pci" model="pcie-root-port"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <controller type="usb" index="0"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     <memballoon model="virtio">
Feb 24 16:14:42 compute-0 nova_compute[188703]:       <stats period="10"/>
Feb 24 16:14:42 compute-0 nova_compute[188703]:     </memballoon>
Feb 24 16:14:42 compute-0 nova_compute[188703]:   </devices>
Feb 24 16:14:42 compute-0 nova_compute[188703]: </domain>
Feb 24 16:14:42 compute-0 nova_compute[188703]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.748 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Preparing to wait for external event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.749 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.749 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.749 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.750 188707 DEBUG nova.virt.libvirt.vif [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-24T16:14:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',id=15,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-sonqcq10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-24T16:14:39Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=25045b6d-8da1-4e43-b027-bab77ff8a2c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.750 188707 DEBUG nova.network.os_vif_util [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.751 188707 DEBUG nova.network.os_vif_util [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.751 188707 DEBUG os_vif [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.751 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.753 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.754 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.758 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.759 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8140cef6-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.759 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8140cef6-d8, col_values=(('external_ids', {'iface-id': '8140cef6-d8b1-4098-8470-3077a2c6668d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:9e:74', 'vm-uuid': '25045b6d-8da1-4e43-b027-bab77ff8a2c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.761 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:42 compute-0 NetworkManager[56995]: <info>  [1771949682.7632] manager: (tap8140cef6-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.765 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.773 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.775 188707 INFO os_vif [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8')
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.825 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.826 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.827 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] No VIF found with MAC fa:16:3e:b7:9e:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 24 16:14:42 compute-0 nova_compute[188703]: 2026-02-24 16:14:42.828 188707 INFO nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Using config drive
Feb 24 16:14:43 compute-0 podman[256297]: 2026-02-24 16:14:43.116856777 +0000 UTC m=+0.078661405 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:14:43 compute-0 podman[256298]: 2026-02-24 16:14:43.171349425 +0000 UTC m=+0.125195097 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.227 188707 INFO nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Creating config drive at /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.config
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.233 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqqhijb_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.359 188707 DEBUG oslo_concurrency.processutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqqhijb_v" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:14:43 compute-0 kernel: tap8140cef6-d8: entered promiscuous mode
Feb 24 16:14:43 compute-0 ovn_controller[98701]: 2026-02-24T16:14:43Z|00171|binding|INFO|Claiming lport 8140cef6-d8b1-4098-8470-3077a2c6668d for this chassis.
Feb 24 16:14:43 compute-0 ovn_controller[98701]: 2026-02-24T16:14:43Z|00172|binding|INFO|8140cef6-d8b1-4098-8470-3077a2c6668d: Claiming fa:16:3e:b7:9e:74 10.100.0.204
Feb 24 16:14:43 compute-0 NetworkManager[56995]: <info>  [1771949683.4271] manager: (tap8140cef6-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.430 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.429 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9e:74 10.100.0.204'], port_security=['fa:16:3e:b7:9e:74 10.100.0.204'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.204/16', 'neutron:device_id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9b818-e146-43d5-9aff-1f87311842d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95c31253f307489ba7dfda7d2823f04a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c332945-b8d3-49ba-8675-a4bd059f5256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dea1c3bb-7b9c-4930-b640-f5e21cc78102, chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=8140cef6-d8b1-4098-8470-3077a2c6668d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.431 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 8140cef6-d8b1-4098-8470-3077a2c6668d in datapath 7ba9b818-e146-43d5-9aff-1f87311842d0 bound to our chassis
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.433 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9b818-e146-43d5-9aff-1f87311842d0
Feb 24 16:14:43 compute-0 ovn_controller[98701]: 2026-02-24T16:14:43Z|00173|binding|INFO|Setting lport 8140cef6-d8b1-4098-8470-3077a2c6668d ovn-installed in OVS
Feb 24 16:14:43 compute-0 ovn_controller[98701]: 2026-02-24T16:14:43Z|00174|binding|INFO|Setting lport 8140cef6-d8b1-4098-8470-3077a2c6668d up in Southbound
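Here ovn-controller claims the logical switch port for this chassis, marks it ovn-installed in OVS, and sets it up in the southbound database. The binding can be verified by hand from the Port_Binding table; a sketch, assuming ovn-sbctl on this host can reach the southbound DB (the logical port UUID is the one from the lines above):

    import subprocess

    LPORT = "8140cef6-d8b1-4098-8470-3077a2c6668d"

    # Ask the southbound DB which chassis (if any) has bound this logical
    # port and whether ovn-controller has marked it up.
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)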
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.450 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[76336f21-ca40-4a97-a186-a7675f827e6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.454 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.456 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 systemd-machined[158049]: New machine qemu-16-instance-0000000f.
Feb 24 16:14:43 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.477 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6766c-6849-4a6d-9976-5f25e4af5617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:14:43 compute-0 systemd-udevd[256361]: Network interface NamePolicy= disabled on kernel command line.
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.484 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f6df65-0c12-4428-a9a3-b5a41253f694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:14:43 compute-0 NetworkManager[56995]: <info>  [1771949683.4953] device (tap8140cef6-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 24 16:14:43 compute-0 NetworkManager[56995]: <info>  [1771949683.4962] device (tap8140cef6-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.510 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[76353eee-1104-44f9-bc42-af7d2044fc09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.524 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f8c313-089c-42df-aa03-a82e0ab4c729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9b818-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:80:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511234, 'reachable_time': 16121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256366, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.535 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[29d52ce8-4173-44f8-990d-06c939552dc1]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap7ba9b818-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511245, 'tstamp': 511245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256371, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9b818-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511248, 'tstamp': 511248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256371, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
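The two large privsep replies above are netlink dumps taken inside the metadata namespace: an RTM_NEWLINK record confirming the tap device is up, then RTM_NEWADDR records for the subnet address 10.100.0.2/16 and the metadata address 169.254.169.254/32. Neutron gathers these through pyroute2 under privsep; a rough equivalent, run as root and assuming the ovnmeta namespace named in the dump still exists:

    from pyroute2 import NetNS

    NS = "ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0"

    with NetNS(NS) as ns:
        # One IFA_ADDRESS attribute per RTM_NEWADDR record, as in the dump.
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  f"/{addr['prefixlen']}")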
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.537 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9b818-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.539 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.541 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.543 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9b818-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.543 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.544 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9b818-e0, col_values=(('external_ids', {'iface-id': '0f982f60-a551-4bd9-8329-8decd220388f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:14:43 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:43.544 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
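The ovsdbapp transactions above re-assert the metadata tap's plumbing: delete the port from br-ex if it is there, add it to br-int, and set external_ids:iface-id so ovn-controller can match the interface to its logical port. The second and third report "Transaction caused no change" because the port was already in the desired state. A sketch of the same idempotent steps using the ovs-vsctl CLI instead of ovsdbapp:

    import subprocess

    PORT = "tap7ba9b818-e0"
    IFACE_ID = "0f982f60-a551-4bd9-8329-8decd220388f"

    for cmd in (
        # --if-exists / --may-exist make the calls idempotent, matching the
        # log's DelPortCommand, AddPortCommand and DbSetCommand semantics.
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
    ):
        subprocess.run(cmd, check=True)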
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.672 188707 DEBUG nova.compute.manager [req-6ea2691b-a10e-431a-bd18-4347b5f2cae5 req-286265f4-a529-47a0-b4d5-a7e47b777197 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.673 188707 DEBUG oslo_concurrency.lockutils [req-6ea2691b-a10e-431a-bd18-4347b5f2cae5 req-286265f4-a529-47a0-b4d5-a7e47b777197 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.674 188707 DEBUG oslo_concurrency.lockutils [req-6ea2691b-a10e-431a-bd18-4347b5f2cae5 req-286265f4-a529-47a0-b4d5-a7e47b777197 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.675 188707 DEBUG oslo_concurrency.lockutils [req-6ea2691b-a10e-431a-bd18-4347b5f2cae5 req-286265f4-a529-47a0-b4d5-a7e47b777197 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.676 188707 DEBUG nova.compute.manager [req-6ea2691b-a10e-431a-bd18-4347b5f2cae5 req-286265f4-a529-47a0-b4d5-a7e47b777197 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Processing event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.693 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.800 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949683.799888, 25045b6d-8da1-4e43-b027-bab77ff8a2c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.801 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] VM Started (Lifecycle Event)
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.803 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.807 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.811 188707 INFO nova.virt.libvirt.driver [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Instance spawned successfully.
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.812 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.819 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.824 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.833 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.834 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.836 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.837 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.838 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.838 188707 DEBUG nova.virt.libvirt.driver [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.847 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.848 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949683.8000076, 25045b6d-8da1-4e43-b027-bab77ff8a2c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.848 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] VM Paused (Lifecycle Event)
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.868 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.873 188707 DEBUG nova.virt.driver [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] Emitting event <LifecycleEvent: 1771949683.8063858, 25045b6d-8da1-4e43-b027-bab77ff8a2c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.874 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] VM Resumed (Lifecycle Event)
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.889 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.900 188707 INFO nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Took 4.83 seconds to spawn the instance on the hypervisor.
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.900 188707 DEBUG nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.904 188707 DEBUG nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.936 188707 INFO nova.compute.manager [None req-f96e342e-1826-4655-beb8-3631792831c0 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.962 188707 INFO nova.compute.manager [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Took 5.30 seconds to build instance.
Feb 24 16:14:43 compute-0 nova_compute[188703]: 2026-02-24 16:14:43.977 188707 DEBUG oslo_concurrency.lockutils [None req-57cc20b7-25ae-42e1-a986-ad986a5a031f 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
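Between "Guest created on hypervisor" and this lock release, the lifecycle handler twice compares the database power state (0, NOSTATE) with the hypervisor's (1, RUNNING) and skips the sync because the spawn task is still pending. The hypervisor side of that comparison is a libvirt domain-state lookup; a sketch with the libvirt Python bindings, using the domain name systemd-machined reported above:

    import libvirt

    # A read-only connection is enough for state queries.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    state, reason = dom.state()
    # libvirt.VIR_DOMAIN_RUNNING == 1, matching "VM power_state: 1" in the log.
    print(state == libvirt.VIR_DOMAIN_RUNNING)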
Feb 24 16:14:44 compute-0 nova_compute[188703]: 2026-02-24 16:14:44.280 188707 DEBUG nova.network.neutron [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated VIF entry in instance network info cache for port 8140cef6-d8b1-4098-8470-3077a2c6668d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 24 16:14:44 compute-0 nova_compute[188703]: 2026-02-24 16:14:44.280 188707 DEBUG nova.network.neutron [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:14:44 compute-0 nova_compute[188703]: 2026-02-24 16:14:44.298 188707 DEBUG oslo_concurrency.lockutils [req-efeed03c-3581-4e7b-8445-ec7f2fcdcc77 req-18aac045-233a-46fd-9de4-54672d03d878 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:14:45 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 24 16:14:45 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.738 188707 DEBUG nova.compute.manager [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.739 188707 DEBUG oslo_concurrency.lockutils [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.740 188707 DEBUG oslo_concurrency.lockutils [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.740 188707 DEBUG oslo_concurrency.lockutils [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.741 188707 DEBUG nova.compute.manager [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] No waiting events found dispatching network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:14:45 compute-0 nova_compute[188703]: 2026-02-24 16:14:45.741 188707 WARNING nova.compute.manager [req-8e6c31e3-a3e2-4c0c-99ac-52c49f7f8ce3 req-946360e1-27f2-4024-9161-7ead8ea6997d 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received unexpected event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d for instance with vm_state active and task_state None.
Feb 24 16:14:47 compute-0 podman[256401]: 2026-02-24 16:14:47.110102051 +0000 UTC m=+0.068309664 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:14:47 compute-0 nova_compute[188703]: 2026-02-24 16:14:47.763 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:48 compute-0 nova_compute[188703]: 2026-02-24 16:14:48.694 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:52 compute-0 nova_compute[188703]: 2026-02-24 16:14:52.768 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:53 compute-0 nova_compute[188703]: 2026-02-24 16:14:53.696 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:55.743 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:14:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:55.744 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:14:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:14:55.744 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:14:56 compute-0 podman[256426]: 2026-02-24 16:14:56.129380545 +0000 UTC m=+0.078331085 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 24 16:14:56 compute-0 podman[256425]: 2026-02-24 16:14:56.142742897 +0000 UTC m=+0.099240033 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:14:57 compute-0 nova_compute[188703]: 2026-02-24 16:14:57.773 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:58 compute-0 nova_compute[188703]: 2026-02-24 16:14:58.699 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:14:59 compute-0 podman[204685]: time="2026-02-24T16:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:14:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:14:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 24 16:15:01 compute-0 sshd-session[256469]: error: kex_exchange_identification: read: Connection reset by peer
Feb 24 16:15:01 compute-0 sshd-session[256469]: Connection reset by 176.120.22.52 port 10294
Feb 24 16:15:01 compute-0 podman[256471]: 2026-02-24 16:15:01.140181432 +0000 UTC m=+0.094201676 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:15:01 compute-0 podman[256470]: 2026-02-24 16:15:01.145521797 +0000 UTC m=+0.104094854 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64)
Feb 24 16:15:01 compute-0 openstack_network_exporter[207830]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:15:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:15:01 compute-0 openstack_network_exporter[207830]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:15:01 compute-0 openstack_network_exporter[207830]: 
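The two exporter errors come from ovs-appctl's dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only apply to the userspace (netdev/DPDK) datapath; this host uses the kernel datapath ("datapath_type": "system" in the VIF details above), so there is no PMD datapath to report on. Reproducing the failure by hand, as a sketch assuming local access to the ovs-vswitchd control socket:

    import subprocess

    # On a kernel-datapath host this fails with "please specify an existing
    # datapath", exactly as the exporter logs; on a DPDK host it prints
    # per-PMD statistics instead.
    res = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True,
    )
    print(res.returncode, res.stderr.strip() or res.stdout)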
Feb 24 16:15:02 compute-0 nova_compute[188703]: 2026-02-24 16:15:02.778 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:03 compute-0 nova_compute[188703]: 2026-02-24 16:15:03.700 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:07 compute-0 podman[256508]: 2026-02-24 16:15:07.144026827 +0000 UTC m=+0.097247368 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.openshift.expose-services=)
Feb 24 16:15:07 compute-0 nova_compute[188703]: 2026-02-24 16:15:07.781 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:08 compute-0 nova_compute[188703]: 2026-02-24 16:15:08.703 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:10 compute-0 nova_compute[188703]: 2026-02-24 16:15:10.814 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:10 compute-0 nova_compute[188703]: 2026-02-24 16:15:10.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:11 compute-0 nova_compute[188703]: 2026-02-24 16:15:11.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:11 compute-0 nova_compute[188703]: 2026-02-24 16:15:11.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:15:12 compute-0 nova_compute[188703]: 2026-02-24 16:15:12.785 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:12 compute-0 nova_compute[188703]: 2026-02-24 16:15:12.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:12 compute-0 nova_compute[188703]: 2026-02-24 16:15:12.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:15:12 compute-0 nova_compute[188703]: 2026-02-24 16:15:12.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:15:13 compute-0 ovn_controller[98701]: 2026-02-24T16:15:13Z|00175|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Feb 24 16:15:13 compute-0 nova_compute[188703]: 2026-02-24 16:15:13.705 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:14 compute-0 nova_compute[188703]: 2026-02-24 16:15:14.099 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:15:14 compute-0 nova_compute[188703]: 2026-02-24 16:15:14.100 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:15:14 compute-0 nova_compute[188703]: 2026-02-24 16:15:14.100 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:15:14 compute-0 nova_compute[188703]: 2026-02-24 16:15:14.100 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:15:14 compute-0 podman[256528]: 2026-02-24 16:15:14.128008792 +0000 UTC m=+0.090739143 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:15:14 compute-0 podman[256529]: 2026-02-24 16:15:14.191875314 +0000 UTC m=+0.153300300 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.406 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.429 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.430 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.430 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.431 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.431 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:17 compute-0 ovn_controller[98701]: 2026-02-24T16:15:17Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:9e:74 10.100.0.204
Feb 24 16:15:17 compute-0 ovn_controller[98701]: 2026-02-24T16:15:17Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:9e:74 10.100.0.204
Feb 24 16:15:17 compute-0 nova_compute[188703]: 2026-02-24 16:15:17.790 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:18 compute-0 podman[256582]: 2026-02-24 16:15:18.207021493 +0000 UTC m=+0.157180075 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:15:18 compute-0 nova_compute[188703]: 2026-02-24 16:15:18.708 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:20 compute-0 nova_compute[188703]: 2026-02-24 16:15:20.426 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:20 compute-0 nova_compute[188703]: 2026-02-24 16:15:20.426 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.794 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:15:22 compute-0 nova_compute[188703]: 2026-02-24 16:15:22.977 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.066 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.142 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.143 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.202 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.210 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.291 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.293 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.375 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.712 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.811 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.813 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4933MB free_disk=72.09954833984375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.814 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.815 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.911 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.912 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.912 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.913 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:15:23 compute-0 nova_compute[188703]: 2026-02-24 16:15:23.981 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:15:24 compute-0 nova_compute[188703]: 2026-02-24 16:15:24.002 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:15:24 compute-0 nova_compute[188703]: 2026-02-24 16:15:24.039 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:15:24 compute-0 nova_compute[188703]: 2026-02-24 16:15:24.041 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:15:27 compute-0 podman[256621]: 2026-02-24 16:15:27.12876292 +0000 UTC m=+0.086044015 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:15:27 compute-0 podman[256620]: 2026-02-24 16:15:27.134064374 +0000 UTC m=+0.093491857 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:15:27 compute-0 nova_compute[188703]: 2026-02-24 16:15:27.798 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:28 compute-0 nova_compute[188703]: 2026-02-24 16:15:28.715 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:29 compute-0 podman[204685]: time="2026-02-24T16:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:15:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:15:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 24 16:15:31 compute-0 openstack_network_exporter[207830]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:15:31 compute-0 openstack_network_exporter[207830]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:15:32 compute-0 podman[256662]: 2026-02-24 16:15:32.130733778 +0000 UTC m=+0.094874164 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:15:32 compute-0 podman[256663]: 2026-02-24 16:15:32.139399373 +0000 UTC m=+0.096836898 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:15:32 compute-0 nova_compute[188703]: 2026-02-24 16:15:32.801 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:33 compute-0 nova_compute[188703]: 2026-02-24 16:15:33.717 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:37 compute-0 nova_compute[188703]: 2026-02-24 16:15:37.806 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:38 compute-0 podman[256702]: 2026-02-24 16:15:38.13592913 +0000 UTC m=+0.098249536 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, config_id=openstack_network_exporter, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, distribution-scope=public, vcs-type=git, version=9.7)
Feb 24 16:15:38 compute-0 nova_compute[188703]: 2026-02-24 16:15:38.721 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.839 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.840 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.855 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.859 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/25045b6d-8da1-4e43-b027-bab77ff8a2c1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}73b0c13b5a4a5040b844caf061f86a047525470480760071a896533737f49d3e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 24 16:15:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:39.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.137 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 24 Feb 2026 16:15:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cb9bbd39-9693-4174-87fb-24141df71e2c x-openstack-request-id: req-cb9bbd39-9693-4174-87fb-24141df71e2c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.137 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "25045b6d-8da1-4e43-b027-bab77ff8a2c1", "name": "te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec", "status": "ACTIVE", "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "user_id": "69d3eddd2a7d49bf9a69e0ccbb00f957", "metadata": {"metering.server_group": "677c1c47-5c86-4e10-835b-809c15045b3b"}, "hostId": "d3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b", "image": {"id": "c4831085-6e4d-4710-9d1c-263fd9bf6235", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c4831085-6e4d-4710-9d1c-263fd9bf6235"}]}, "flavor": {"id": "3303ac8b-27ad-4047-abf8-38e38cd23b6f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3303ac8b-27ad-4047-abf8-38e38cd23b6f"}]}, "created": "2026-02-24T16:14:37Z", "updated": "2026-02-24T16:14:43Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.204", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b7:9e:74"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/25045b6d-8da1-4e43-b027-bab77ff8a2c1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/25045b6d-8da1-4e43-b027-bab77ff8a2c1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-24T16:14:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.137 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/25045b6d-8da1-4e43-b027-bab77ff8a2c1 used request id req-cb9bbd39-9693-4174-87fb-24141df71e2c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.139 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'name': 'te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.144 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.145 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:15:42.145746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.180 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/memory.usage volume: 43.48046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.213 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 47.05078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.214 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.215 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:15:42.215451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.235 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.236 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.255 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.255 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.256 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:15:42.257442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.262 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 25045b6d-8da1-4e43-b027-bab77ff8a2c1 / tap8140cef6-d8 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.262 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.268 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.270 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.270 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes volume: 1682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.270 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:15:42.270214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.273 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.273 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.275 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.275 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.276 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.277 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.278 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.278 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.279 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.279 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:15:42.273219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:15:42.275724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:15:42.278148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.280 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.281 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:15:42.281419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.328 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 29162496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.329 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.384 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 30550528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.384 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.386 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.386 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:15:42.386355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.389 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 997011743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.389 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 79286088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.389 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1082606427 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:15:42.388772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.390 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 115477352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.391 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.392 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/cpu volume: 56760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.393 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 330990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:15:42.392333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.395 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 1040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.395 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.395 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.396 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.398 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.398 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:15:42.394852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.399 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:15:42.398379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.400 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.401 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.401 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:15:42.400946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.403 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.404 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:15:42.403684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.405 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.405 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:15:42.407648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.408 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 3561839556 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.408 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.409 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3786554455 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.409 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.411 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 72802304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.412 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.412 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 72966144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:15:42.411437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.413 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.414 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.415 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.415 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-24T16:15:42.415529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.415 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec>]
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.417 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.417 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.418 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.418 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.419 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.420 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.421 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.422 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:15:42.417521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:15:42.421492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.424 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.425 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:15:42.424362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:15:42.426878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:15:42.428129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec>]
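The ERROR above is ceilometer's permanent-error path: LibvirtInspector cannot supply data for OutgoingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager blocklists the listed resources for this source. A minimal sketch of that contract, assuming a simplified pollster (the class and its stats logic are illustrative, not ceilometer's actual implementation; only PollsterBase and PollsterPermanentError come from the traceback above):

```python
# Sketch (not the real ceilometer source): how a pollster can signal a
# permanent failure so the manager stops polling those resources on this
# source, as in the "Preventing pollster ..." ERROR line above.
from ceilometer.polling import plugin_base

class OutgoingBytesRatePollsterSketch(plugin_base.PollsterBase):
    @property
    def default_discovery(self):
        return 'local_instances'

    def get_samples(self, manager, cache, resources):
        for instance in resources:
            stats = None  # assume the inspector returned no rate data
            if stats is None:
                # PollsterPermanentError tells the polling manager to
                # exclude these resources from future polls of this source.
                raise plugin_base.PollsterPermanentError(resources)
            yield stats  # never reached in this sketch
```

After this exception the manager drops the listed instance from further iterations of this polling task, which is why network.outgoing.bytes.rate produces no samples in later cycles.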
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.430 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.430 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.430 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.431 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 1956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-24T16:15:42.429191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:15:42.430018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:15:42.431461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.434 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.434 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.434 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.434 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.434 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.435 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.435 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.435 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.435 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.435 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.436 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:15:42 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:15:42.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
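Each meter above follows the same trace: discovery, coordination check, heartbeat, per-instance samples, completion. A rough paraphrase of one such pass; every helper name here is hypothetical and does not correspond to ceilometer's real internals:

```python
# Rough paraphrase of one polling pass as reflected in the log lines above.
# All helper names (discover, requires_coordination, hashring_owns,
# update_heartbeat, notify) are hypothetical stand-ins.
def run_pollster(manager, pollster, source):
    resources = manager.discover(pollster.default_discovery)    # "Executing discovery process ..."
    if source.requires_coordination():                          # "Checking if we need coordination ..."
        if not manager.hashring_owns(pollster, resources):
            return                                              # another agent owns these resources
    manager.update_heartbeat(pollster.name)                     # "Pollster heartbeat update: ..."
    for sample in pollster.get_samples(manager, {}, resources): # "<uuid>/<meter> volume: ..."
        manager.notify(sample)
    # "Finished polling pollster <name> in the context of pollsters"
```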
Feb 24 16:15:42 compute-0 nova_compute[188703]: 2026-02-24 16:15:42.810 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:43 compute-0 nova_compute[188703]: 2026-02-24 16:15:43.726 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
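The recurring ovsdbapp `__log_wakeup [POLLIN] on fd 26` lines are the OVSDB IDL's event loop waking because its connection to ovsdb-server became readable. The underlying mechanism is plain poll(2); a self-contained illustration with Python's select module, using a pipe in place of the OVSDB socket:

```python
# Minimal illustration of the POLLIN wakeup these ovs poller.py lines log:
# a poll() loop that sleeps until a file descriptor becomes readable.
# An os.pipe() stands in for the real fd 26 (the OVSDB connection).
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

os.write(w, b"ovsdb update")          # simulate ovsdb-server sending data
for fd, event in poller.poll(1000):   # returns once the fd is readable
    if event & select.POLLIN:
        print(f"[POLLIN] on fd {fd}: {os.read(fd, 64)!r}")
```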
Feb 24 16:15:44 compute-0 podman[256725]: 2026-02-24 16:15:44.798993097 +0000 UTC m=+0.104952888 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:15:44 compute-0 podman[256726]: 2026-02-24 16:15:44.86469407 +0000 UTC m=+0.164469852 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
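The podman `health_status=healthy` events above are emitted when podman runs the healthcheck 'test' command recorded in config_data (e.g. '/openstack/healthcheck compute'). The same check can be triggered by hand with `podman healthcheck run`; a small sketch, assuming it runs on compute-0 with privileges to reach these containers:

```python
# Run the same healthcheck podman runs periodically for these events.
# "podman healthcheck run" executes the container's configured test
# command and exits 0 when healthy.
import subprocess

for name in ("ceilometer_agent_compute", "ovn_controller"):
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"{name}: {status}")
```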
Feb 24 16:15:47 compute-0 nova_compute[188703]: 2026-02-24 16:15:47.816 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:48 compute-0 nova_compute[188703]: 2026-02-24 16:15:48.726 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:49 compute-0 podman[256771]: 2026-02-24 16:15:49.134605059 +0000 UTC m=+0.087147875 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:15:52 compute-0 nova_compute[188703]: 2026-02-24 16:15:52.821 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:53 compute-0 nova_compute[188703]: 2026-02-24 16:15:53.731 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:15:55.744 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:15:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:15:55.744 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:15:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:15:55.744 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
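The three ovn_metadata_agent lines show one acquire/hold/release cycle of an oslo.concurrency named lock around ProcessMonitor._check_child_processes. The usual way to get exactly this logging is the lockutils.synchronized decorator; a minimal sketch (the function body is a placeholder):

```python
# Sketch of the oslo.concurrency pattern behind the three lock lines:
# "Acquiring lock", "acquired ... waited 0.000s", "released ... held 0.000s".
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Runs with the named in-process lock held; lockutils emits the
    # acquire/release DEBUG messages seen in the journal when oslo
    # logging is at DEBUG level.
    pass

_check_child_processes()
```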
Feb 24 16:15:57 compute-0 nova_compute[188703]: 2026-02-24 16:15:57.827 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:58 compute-0 podman[256796]: 2026-02-24 16:15:58.101659938 +0000 UTC m=+0.065775546 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:15:58 compute-0 podman[256797]: 2026-02-24 16:15:58.120395656 +0000 UTC m=+0.071523481 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 16:15:58 compute-0 nova_compute[188703]: 2026-02-24 16:15:58.734 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:15:59 compute-0 podman[204685]: time="2026-02-24T16:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:15:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:15:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 24 16:16:01 compute-0 openstack_network_exporter[207830]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:16:01 compute-0 openstack_network_exporter[207830]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
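Both appctl errors have the same cause: dpif-netdev/* commands are answered only by the userspace (netdev/DPDK) datapath, and this host runs the kernel datapath (consistent with "datapath_type": "system" in the port binding logged below), so ovs-vswitchd replies "please specify an existing datapath". A reproduction sketch, assuming ovs-appctl is on PATH and the default ovs-vswitchd control socket is reachable (usually requires root):

```python
# Reproduce the exporter's failing calls: on a kernel-datapath host the
# dpif-netdev appctl commands fail with "please specify an existing
# datapath" and ovs-appctl exits non-zero.
import subprocess

for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
    result = subprocess.run(
        ["ovs-appctl", cmd],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"{cmd}: {(result.stderr or result.stdout).strip()}")
```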
Feb 24 16:16:02 compute-0 nova_compute[188703]: 2026-02-24 16:16:02.831 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:03 compute-0 podman[256835]: 2026-02-24 16:16:03.136375224 +0000 UTC m=+0.089557741 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, release-0.7.12=, name=ubi9, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Feb 24 16:16:03 compute-0 podman[256836]: 2026-02-24 16:16:03.148706019 +0000 UTC m=+0.099097850 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Feb 24 16:16:03 compute-0 nova_compute[188703]: 2026-02-24 16:16:03.738 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:07 compute-0 nova_compute[188703]: 2026-02-24 16:16:07.834 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:08 compute-0 nova_compute[188703]: 2026-02-24 16:16:08.738 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:09 compute-0 podman[256875]: 2026-02-24 16:16:09.111369516 +0000 UTC m=+0.068550940 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, build-date=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Feb 24 16:16:11 compute-0 nova_compute[188703]: 2026-02-24 16:16:11.042 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:11 compute-0 nova_compute[188703]: 2026-02-24 16:16:11.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:12 compute-0 nova_compute[188703]: 2026-02-24 16:16:12.837 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:13 compute-0 nova_compute[188703]: 2026-02-24 16:16:13.740 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:13 compute-0 nova_compute[188703]: 2026-02-24 16:16:13.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:13 compute-0 nova_compute[188703]: 2026-02-24 16:16:13.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:16:14 compute-0 nova_compute[188703]: 2026-02-24 16:16:14.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:14 compute-0 nova_compute[188703]: 2026-02-24 16:16:14.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:16:15 compute-0 podman[256895]: 2026-02-24 16:16:15.134176207 +0000 UTC m=+0.097813465 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:16:15 compute-0 podman[256896]: 2026-02-24 16:16:15.151588988 +0000 UTC m=+0.112338218 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 24 16:16:16 compute-0 nova_compute[188703]: 2026-02-24 16:16:16.110 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:16:16 compute-0 nova_compute[188703]: 2026-02-24 16:16:16.111 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:16:16 compute-0 nova_compute[188703]: 2026-02-24 16:16:16.112 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:16:17 compute-0 nova_compute[188703]: 2026-02-24 16:16:17.840 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:18 compute-0 nova_compute[188703]: 2026-02-24 16:16:18.743 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.304 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.331 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.332 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.333 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.333 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:19 compute-0 nova_compute[188703]: 2026-02-24 16:16:19.333 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
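The ComputeManager._poll_* lines come from oslo.service's periodic task runner: each decorated method is invoked from run_periodic_tasks once its spacing elapses. A minimal sketch of that machinery (the manager class and task body are illustrative, not nova's):

```python
# Sketch of the oslo.service machinery behind the
# "Running periodic task ComputeManager._poll_*" lines.
from oslo_config import cfg
from oslo_service import periodic_task

class ManagerSketch(periodic_task.PeriodicTasks):
    # run_immediately=True so the first run_periodic_tasks() call fires it
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _poll_rescued_instances(self, context):
        pass  # nova checks rescue timeouts here

manager = ManagerSketch(cfg.CONF)
manager.run_periodic_tasks(context=None)  # logs "Running periodic task ..."
```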
Feb 24 16:16:20 compute-0 podman[256962]: 2026-02-24 16:16:20.112362008 +0000 UTC m=+0.069549187 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:16:21 compute-0 nova_compute[188703]: 2026-02-24 16:16:21.328 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:22 compute-0 nova_compute[188703]: 2026-02-24 16:16:22.845 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.745 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.985 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
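The acquiring/acquired/released triple above is oslo.concurrency's standard lock trace; callers normally get it from the synchronized decorator rather than managing the lock by hand. A hedged sketch of that pattern:

```python
# Sketch of the lock pattern behind the "compute_resources" messages
# (decorator form of oslo_concurrency.lockutils; body is illustrative).
from oslo_concurrency import lockutils


@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Runs with the named lock held; acquisition and release are logged
    # at DEBUG exactly like the three lines above.
    pass


clean_compute_node_cache()
```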
Feb 24 16:16:23 compute-0 nova_compute[188703]: 2026-02-24 16:16:23.987 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.085 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.142 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.144 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.225 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.235 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.303 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.306 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.357 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
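The qemu-img probes above run through oslo.concurrency's prlimit wrapper, which caps the child's address space and CPU time (1 GiB and 30 s in the logged command line). A sketch reproducing that call; the disk path is a placeholder, not a real instance:

```python
# Re-create the resource-limited qemu-img probe from the log.
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, _err = processutils.execute(
    "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
    "--force-share", "--output=json",
    env_variables={"LC_ALL": "C", "LANG": "C"},
    prlimit=limits,
)
print(out)
```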
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.707 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.708 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4891MB free_disk=72.09949111938477GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.709 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.709 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.909 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.910 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.911 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.912 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
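The final view reconciles with the two instances reported above (each holding {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}) once the 512 MB host memory reservation from the inventory below is added in:

```python
# Cross-check of the "Final resource view" figures (reserved RAM taken
# from the MEMORY_MB inventory logged just after this line).
used_vcpus = 2 * 1          # -> 2, as logged
used_disk  = 2 * 1          # -> 2 GB
used_ram   = 2 * 128 + 512  # -> 768 MB
```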
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.927 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.946 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.947 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.964 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:16:24 compute-0 nova_compute[188703]: 2026-02-24 16:16:24.984 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:16:25 compute-0 nova_compute[188703]: 2026-02-24 16:16:25.052 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:16:25 compute-0 nova_compute[188703]: 2026-02-24 16:16:25.068 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
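Placement derives schedulable capacity from such an inventory as (total - reserved) * allocation_ratio, so the unchanged inventory above corresponds to:

```python
# Schedulable capacity implied by the logged inventory.
vcpu   = (8 - 0) * 4.0       # 32 VCPUs
memory = (7679 - 512) * 1.0  # 7167 MB
disk   = (79 - 1) * 0.9      # 70.2 GB
```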
Feb 24 16:16:25 compute-0 nova_compute[188703]: 2026-02-24 16:16:25.071 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:16:25 compute-0 nova_compute[188703]: 2026-02-24 16:16:25.072 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:16:27 compute-0 nova_compute[188703]: 2026-02-24 16:16:27.849 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:28 compute-0 nova_compute[188703]: 2026-02-24 16:16:28.753 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:29 compute-0 podman[256999]: 2026-02-24 16:16:29.136561707 +0000 UTC m=+0.090031974 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:16:29 compute-0 podman[256998]: 2026-02-24 16:16:29.140593626 +0000 UTC m=+0.098195345 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:16:29 compute-0 podman[204685]: time="2026-02-24T16:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:16:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:16:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
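The two GET lines above are libpod REST calls arriving over the podman socket (the same /run/podman/podman.sock mounted into podman_exporter). A self-contained sketch that issues the containers/json query the same way:

```python
# Query the libpod endpoint seen in the access log, over the unix socket.
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```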
Feb 24 16:16:31 compute-0 openstack_network_exporter[207830]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:16:31 compute-0 openstack_network_exporter[207830]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:16:32 compute-0 nova_compute[188703]: 2026-02-24 16:16:32.854 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:33 compute-0 nova_compute[188703]: 2026-02-24 16:16:33.755 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:34 compute-0 podman[257039]: 2026-02-24 16:16:34.118602864 +0000 UTC m=+0.074801371 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release-0.7.12=, version=9.4, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Feb 24 16:16:34 compute-0 podman[257040]: 2026-02-24 16:16:34.140565169 +0000 UTC m=+0.091050121 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.857 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.943 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.944 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.945 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.946 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.947 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.948 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:16:37 compute-0 nova_compute[188703]: 2026-02-24 16:16:37.979 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.031 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.031 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Image id c4831085-6e4d-4710-9d1c-263fd9bf6235 yields fingerprint bda4b94876a964317b1f9cfba4b35250036d1777 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.032 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] image c4831085-6e4d-4710-9d1c-263fd9bf6235 at (/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777): checking
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.032 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] image c4831085-6e4d-4710-9d1c-263fd9bf6235 at (/var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.034 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
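The fingerprints above follow nova's image-cache naming scheme: a base file is named after the SHA-1 of the Glance image ID, which is why the empty image ID on the last line yields the well-known empty-string digest:

```python
# Base-file names are SHA-1 digests of the image ID (nova imagecache scheme).
import hashlib


def base_file_name(image_id: str) -> str:
    return hashlib.sha1(image_id.encode("utf-8")).hexdigest()


# The empty ID reproduces the digest logged above.
assert base_file_name("") == "da39a3ee5e6b4b0d3255bfef95601890afd80709"
```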
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.035 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.036 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.036 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.123 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.125 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 is backed by bda4b94876a964317b1f9cfba4b35250036d1777 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.125 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] 25045b6d-8da1-4e43-b027-bab77ff8a2c1 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.126 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] 25045b6d-8da1-4e43-b027-bab77ff8a2c1 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.126 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.186 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.188 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 is backed by bda4b94876a964317b1f9cfba4b35250036d1777 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.188 188707 WARNING nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.188 188707 WARNING nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.189 188707 WARNING nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.189 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Active base files: /var/lib/nova/instances/_base/bda4b94876a964317b1f9cfba4b35250036d1777
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.190 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Removable base files: /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759 /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.190 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b0be823aa24a489a3a4f58a9a60afb2758db2759
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.191 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b0586a242fd806d9546514e047f78171947acd4f
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.191 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c13b49024b5494b3a1c7152ba68db7875bd84683
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.192 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.192 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.192 188707 DEBUG nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.193 188707 INFO nova.virt.libvirt.imagecache [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
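The "too young to remove" lines are the image cache's age gate: a removable base file is only unlinked once its last-modified time exceeds the configured minimum age (nova's remove_unused_original_minimum_age_seconds option, 86400 s by default). A minimal sketch of that check:

```python
# Sketch of the image-cache age gate (option name from nova's
# [image_cache] group; 86400 s is its documented default).
import os
import time


def old_enough(path: str, min_age_s: int = 86400) -> bool:
    return (time.time() - os.path.getmtime(path)) > min_age_s
```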
Feb 24 16:16:38 compute-0 nova_compute[188703]: 2026-02-24 16:16:38.758 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:40 compute-0 podman[257083]: 2026-02-24 16:16:40.167001948 +0000 UTC m=+0.123014979 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, architecture=x86_64, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 16:16:42 compute-0 nova_compute[188703]: 2026-02-24 16:16:42.860 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:43 compute-0 nova_compute[188703]: 2026-02-24 16:16:43.761 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:45 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 24 16:16:45 compute-0 podman[257104]: 2026-02-24 16:16:45.592890995 +0000 UTC m=+0.110730135 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute)
Feb 24 16:16:45 compute-0 podman[257105]: 2026-02-24 16:16:45.64397161 +0000 UTC m=+0.159958940 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:16:47 compute-0 nova_compute[188703]: 2026-02-24 16:16:47.863 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:48 compute-0 nova_compute[188703]: 2026-02-24 16:16:48.762 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:51 compute-0 podman[257151]: 2026-02-24 16:16:51.126407061 +0000 UTC m=+0.081150442 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:16:52 compute-0 nova_compute[188703]: 2026-02-24 16:16:52.866 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:53 compute-0 nova_compute[188703]: 2026-02-24 16:16:53.765 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:16:55.745 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:16:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:16:55.745 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:16:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:16:55.746 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:16:57 compute-0 nova_compute[188703]: 2026-02-24 16:16:57.871 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:58 compute-0 nova_compute[188703]: 2026-02-24 16:16:58.769 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:16:59 compute-0 podman[204685]: time="2026-02-24T16:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:16:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:16:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 24 16:17:00 compute-0 podman[257175]: 2026-02-24 16:17:00.118821697 +0000 UTC m=+0.072549219 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:17:00 compute-0 podman[257176]: 2026-02-24 16:17:00.137905455 +0000 UTC m=+0.080366931 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:17:01 compute-0 openstack_network_exporter[207830]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:17:01 compute-0 openstack_network_exporter[207830]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:17:02 compute-0 nova_compute[188703]: 2026-02-24 16:17:02.874 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:03 compute-0 nova_compute[188703]: 2026-02-24 16:17:03.771 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:05 compute-0 podman[257219]: 2026-02-24 16:17:05.127961189 +0000 UTC m=+0.079845417 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260223)
Feb 24 16:17:05 compute-0 podman[257218]: 2026-02-24 16:17:05.13164813 +0000 UTC m=+0.089237882 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler)
Feb 24 16:17:07 compute-0 nova_compute[188703]: 2026-02-24 16:17:07.877 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:08 compute-0 nova_compute[188703]: 2026-02-24 16:17:08.774 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:11 compute-0 podman[257254]: 2026-02-24 16:17:11.132265006 +0000 UTC m=+0.091913733 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.7, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 24 16:17:11 compute-0 nova_compute[188703]: 2026-02-24 16:17:11.193 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:12 compute-0 nova_compute[188703]: 2026-02-24 16:17:12.881 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:12 compute-0 nova_compute[188703]: 2026-02-24 16:17:12.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:13 compute-0 nova_compute[188703]: 2026-02-24 16:17:13.776 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:14 compute-0 nova_compute[188703]: 2026-02-24 16:17:14.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:14 compute-0 nova_compute[188703]: 2026-02-24 16:17:14.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:17:14 compute-0 nova_compute[188703]: 2026-02-24 16:17:14.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:17:15 compute-0 nova_compute[188703]: 2026-02-24 16:17:15.154 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:17:15 compute-0 nova_compute[188703]: 2026-02-24 16:17:15.155 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:17:15 compute-0 nova_compute[188703]: 2026-02-24 16:17:15.155 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:17:15 compute-0 nova_compute[188703]: 2026-02-24 16:17:15.156 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:17:16 compute-0 podman[257276]: 2026-02-24 16:17:16.096099041 +0000 UTC m=+0.057259795 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:17:16 compute-0 podman[257277]: 2026-02-24 16:17:16.136328021 +0000 UTC m=+0.091436801 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.139 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.161 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.161 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.162 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.162 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:16 compute-0 nova_compute[188703]: 2026-02-24 16:17:16.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:17 compute-0 nova_compute[188703]: 2026-02-24 16:17:17.884 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:18 compute-0 nova_compute[188703]: 2026-02-24 16:17:18.778 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:18 compute-0 nova_compute[188703]: 2026-02-24 16:17:18.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:18 compute-0 nova_compute[188703]: 2026-02-24 16:17:18.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:18 compute-0 nova_compute[188703]: 2026-02-24 16:17:18.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:17:19 compute-0 nova_compute[188703]: 2026-02-24 16:17:19.952 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:22 compute-0 podman[257320]: 2026-02-24 16:17:22.146346905 +0000 UTC m=+0.104822895 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:17:22 compute-0 nova_compute[188703]: 2026-02-24 16:17:22.887 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:22 compute-0 nova_compute[188703]: 2026-02-24 16:17:22.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.782 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.955 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.989 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.990 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.990 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:17:23 compute-0 nova_compute[188703]: 2026-02-24 16:17:23.991 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.095 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.184 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.185 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.230 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.238 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.327 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.329 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.391 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.769 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.771 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4858MB free_disk=72.0994873046875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.771 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:17:24 compute-0 nova_compute[188703]: 2026-02-24 16:17:24.772 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.037 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.037 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.038 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.038 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.185 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.197 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.199 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:17:25 compute-0 nova_compute[188703]: 2026-02-24 16:17:25.199 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:17:27 compute-0 nova_compute[188703]: 2026-02-24 16:17:27.891 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:28 compute-0 nova_compute[188703]: 2026-02-24 16:17:28.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:28 compute-0 nova_compute[188703]: 2026-02-24 16:17:28.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:17:28 compute-0 nova_compute[188703]: 2026-02-24 16:17:28.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:17:28 compute-0 nova_compute[188703]: 2026-02-24 16:17:28.960 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:17:29 compute-0 podman[204685]: time="2026-02-24T16:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:17:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:17:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 24 16:17:31 compute-0 podman[257356]: 2026-02-24 16:17:31.130540827 +0000 UTC m=+0.088386049 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:17:31 compute-0 podman[257357]: 2026-02-24 16:17:31.162509054 +0000 UTC m=+0.120901800 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:17:31 compute-0 openstack_network_exporter[207830]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:17:31 compute-0 openstack_network_exporter[207830]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:17:32 compute-0 nova_compute[188703]: 2026-02-24 16:17:32.895 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:33 compute-0 nova_compute[188703]: 2026-02-24 16:17:33.797 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:36 compute-0 podman[257396]: 2026-02-24 16:17:36.121920247 +0000 UTC m=+0.078076189 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Feb 24 16:17:36 compute-0 podman[257397]: 2026-02-24 16:17:36.160840013 +0000 UTC m=+0.118694291 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:17:37 compute-0 nova_compute[188703]: 2026-02-24 16:17:37.898 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:38 compute-0 nova_compute[188703]: 2026-02-24 16:17:38.800 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.840 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.841 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fff7eab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
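The burst of "Registering pollster" lines above is the agent loading its meter plugins at startup: each stevedore Extension wraps one entry point, and every pollster shares a single ThreadPoolExecutor plus fresh cache, history, and discovery-cache dicts. A minimal sketch of that pattern, assuming a namespace such as ceilometer.poll.compute (the namespace string, function name, and worker count here are illustrative, not ceilometer's actual internals):

from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

def load_pollsters(namespace='ceilometer.poll.compute'):  # assumed namespace
    # Each entry point in the namespace becomes one Extension object,
    # printed in the log as <stevedore.extension.Extension object at ...>.
    mgr = extension.ExtensionManager(namespace=namespace, invoke_on_load=True)
    executor = ThreadPoolExecutor(max_workers=4)  # shared by every pollster
    registry = []
    for ext in mgr:
        # One registration record per pollster: the shared executor plus
        # empty cache, pollster-history, and discovery-cache dicts, as logged.
        registry.append({'pollster': ext, 'executor': executor,
                         'cache': {}, 'history': {}, 'discovery': {}})
    return registry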
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.846 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'name': 'te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.849 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
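The two "instance data" dicts above are what local libvirt discovery hands to every pollster: one plain dict per running instance, which the samples below key by its 'id'. A hypothetical helper (illustrative only, not a ceilometer API) showing the fields the meters actually draw on:

def summarize(instance):
    # Flavor sizing plus run state; the metering.* metadata (here the
    # server group) is carried through to the samples unchanged.
    flavor = instance['flavor']
    return {'id': instance['id'],
            'vcpus': flavor['vcpus'],
            'ram_mb': flavor['ram'],
            'disk_gb': flavor['disk'],
            'state': instance['OS-EXT-STS:vm_state'],
            'server_group': instance['metadata'].get('metering.server_group')}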
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.849 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:17:39.850072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.874 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/memory.usage volume: 43.46484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.899 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 46.25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
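Every meter below repeats the five-step cycle just shown for memory.usage: discovery, a coordination check (no hashring is configured, so the pollster always runs locally), a heartbeat stamp, one sample per instance, then the "Finished polling" line. A compressed sketch of that loop, with illustrative names rather than ceilometer's internals:

from datetime import datetime, timezone

def run_pollster(name, instances, get_stat, heartbeats, hashring=None):
    if hashring is not None:
        raise NotImplementedError('coordinated polling is not sketched here')
    # Matches the "Pollster heartbeat update" / "Updated heartbeat" pair.
    heartbeats[name] = datetime.now(timezone.utc).isoformat()
    samples = []
    for inst in instances:
        volume = get_stat(inst)  # e.g. memory.usage yields MB: 43.46484375
        samples.append((inst['id'], name, volume))
    return samples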
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:17:39.900753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.918 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.919 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.937 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.937 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.938 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:17:39.938734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.942 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.947 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes.delta volume: 294 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.949 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.951 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.952 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:17:39.947807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:17:39.949007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:17:39.950136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:17:39.951191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:39.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:17:39.952710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
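Note the two worker IDs interleaving above: process 14 does the polling while process 12 batches the "Updated heartbeat" writes, so the status lines can trail the meters they describe by a few milliseconds and even land out of order (as at 16:17:40.058/.057 below). A minimal sketch of that producer/consumer split; the queue mechanism is an assumption for illustration, not ceilometer's actual code:

import multiprocessing as mp

def status_worker(q):
    # Drains heartbeat events emitted by the polling process and records
    # the last-seen timestamp per meter (cf. _update_status in the log).
    beats = {}
    while True:
        item = q.get()
        if item is None:  # shutdown sentinel
            return beats
        meter, stamp = item
        beats[meter] = stamp

if __name__ == '__main__':
    q = mp.Queue()
    worker = mp.Process(target=status_worker, args=(q,))
    worker.start()
    q.put(('memory.usage', '2026-02-24T16:17:39.850072'))
    q.put(None)
    worker.join()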
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.003 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 29162496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.004 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.055 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 30558720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.055 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.058 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:17:40.057381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.059 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 997011743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.059 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 79286088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.060 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1084653831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.060 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 115477352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:17:40.059258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.061 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/cpu volume: 174240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.062 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 332930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
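The cpu samples just above are cumulative guest CPU time in nanoseconds (174240000000 ns is roughly 174.2 s since the instance booted), so a utilization figure needs two consecutive polls. A small illustrative conversion:

def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
    # Share of available CPU consumed between two polls of the
    # cumulative "cpu" meter; both samples are in nanoseconds.
    return 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)

# 3e9 ns consumed over a 300 s interval on this 1-vCPU m1.nano -> 1.0 (%)
print(cpu_util_percent(174240000000, 177240000000, 300))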
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.063 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.063 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 1040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.064 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.064 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:17:40.061485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:17:40.063609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.064 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.066 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.066 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:17:40.065913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.067 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:17:40.067581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.069 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.069 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.070 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:17:40.069219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.070 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
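Taken together, the disk meters polled so far describe each instance's two block devices; by position these appear to be the 1 GiB root disk followed by a small config drive (the device naming below is an assumption). capacity is the virtual size, allocation the bytes the host has set aside, and usage the bytes actually written, which is why usage sits just under allocation for the sparse root disk while the tiny config drive is fully written (its allocation is even rounded up past its virtual size). Values copied from instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1's samples above:

# Per-device figures for one instance, taken from this polling cycle.
devices = {
    'root_disk':    {'capacity': 1073741824, 'allocation': 30351360, 'usage': 29884416},
    'config_drive': {'capacity': 509952,     'allocation': 512000,   'usage': 509952},
}
for name, dev in devices.items():
    written = 100.0 * dev['usage'] / dev['capacity']
    print(f"{name}: {written:.1f}% of virtual size written, "
          f"{dev['allocation'] - dev['usage']} bytes allocated but unwritten")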
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 3590146451 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.071 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.072 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3879691688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:17:40.071563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.072 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.073 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.074 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.074 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:17:40.073582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.074 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.076 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.076 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.076 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:17:40.075836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.076 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.078 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.078 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:17:40.077933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.080 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:17:40.079767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.080 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 294 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:17:40.081576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:17:40.083214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.085 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.085 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:17:40.084956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.086 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:17:40.086477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:17:40.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:17:42 compute-0 podman[257436]: 2026-02-24 16:17:42.107923419 +0000 UTC m=+0.073789203 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal)
Feb 24 16:17:42 compute-0 nova_compute[188703]: 2026-02-24 16:17:42.902 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:43 compute-0 nova_compute[188703]: 2026-02-24 16:17:43.803 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:47 compute-0 podman[257455]: 2026-02-24 16:17:47.124331138 +0000 UTC m=+0.081273756 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:17:47 compute-0 podman[257456]: 2026-02-24 16:17:47.171342914 +0000 UTC m=+0.126773241 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 24 16:17:47 compute-0 nova_compute[188703]: 2026-02-24 16:17:47.905 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:48 compute-0 nova_compute[188703]: 2026-02-24 16:17:48.807 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:52 compute-0 nova_compute[188703]: 2026-02-24 16:17:52.908 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:53 compute-0 podman[257496]: 2026-02-24 16:17:53.049103238 +0000 UTC m=+0.103325034 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:17:53 compute-0 nova_compute[188703]: 2026-02-24 16:17:53.812 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:17:55.748 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:17:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:17:55.748 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:17:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:17:55.749 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:17:57 compute-0 nova_compute[188703]: 2026-02-24 16:17:57.912 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:58 compute-0 nova_compute[188703]: 2026-02-24 16:17:58.812 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:17:59 compute-0 podman[204685]: time="2026-02-24T16:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:17:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:17:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 24 16:18:01 compute-0 openstack_network_exporter[207830]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:18:01 compute-0 openstack_network_exporter[207830]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:18:02 compute-0 podman[257517]: 2026-02-24 16:18:02.125179354 +0000 UTC m=+0.084632887 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:18:02 compute-0 podman[257518]: 2026-02-24 16:18:02.168997362 +0000 UTC m=+0.114659441 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 24 16:18:02 compute-0 nova_compute[188703]: 2026-02-24 16:18:02.917 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:03 compute-0 nova_compute[188703]: 2026-02-24 16:18:03.814 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:07 compute-0 podman[257557]: 2026-02-24 16:18:07.138303215 +0000 UTC m=+0.088093801 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=kepler, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:18:07 compute-0 podman[257558]: 2026-02-24 16:18:07.139454155 +0000 UTC m=+0.083207158 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:18:07 compute-0 nova_compute[188703]: 2026-02-24 16:18:07.920 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:08 compute-0 nova_compute[188703]: 2026-02-24 16:18:08.817 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:10 compute-0 nova_compute[188703]: 2026-02-24 16:18:10.960 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:12 compute-0 nova_compute[188703]: 2026-02-24 16:18:12.923 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:13 compute-0 podman[257594]: 2026-02-24 16:18:13.158559964 +0000 UTC m=+0.108105183 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1770267347, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:18:13 compute-0 nova_compute[188703]: 2026-02-24 16:18:13.821 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:14 compute-0 nova_compute[188703]: 2026-02-24 16:18:14.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:14 compute-0 nova_compute[188703]: 2026-02-24 16:18:14.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:14 compute-0 nova_compute[188703]: 2026-02-24 16:18:14.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:18:16 compute-0 nova_compute[188703]: 2026-02-24 16:18:16.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:16 compute-0 nova_compute[188703]: 2026-02-24 16:18:16.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:18:17 compute-0 nova_compute[188703]: 2026-02-24 16:18:17.240 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:18:17 compute-0 nova_compute[188703]: 2026-02-24 16:18:17.240 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:18:17 compute-0 nova_compute[188703]: 2026-02-24 16:18:17.241 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:18:17 compute-0 nova_compute[188703]: 2026-02-24 16:18:17.927 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:18 compute-0 podman[257616]: 2026-02-24 16:18:18.161425396 +0000 UTC m=+0.114869787 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:18:18 compute-0 podman[257615]: 2026-02-24 16:18:18.168212811 +0000 UTC m=+0.125584537 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:18:18 compute-0 nova_compute[188703]: 2026-02-24 16:18:18.821 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.265 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.285 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.286 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.287 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.287 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:19 compute-0 nova_compute[188703]: 2026-02-24 16:18:19.288 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:20 compute-0 nova_compute[188703]: 2026-02-24 16:18:20.283 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.650 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.689 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.690 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Triggering sync for uuid 25045b6d-8da1-4e43-b027-bab77ff8a2c1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.691 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.692 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.693 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.693 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.736 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:18:21 compute-0 nova_compute[188703]: 2026-02-24 16:18:21.738 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:18:22 compute-0 sshd-session[257661]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 16:18:22 compute-0 nova_compute[188703]: 2026-02-24 16:18:22.932 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:23 compute-0 nova_compute[188703]: 2026-02-24 16:18:23.826 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:24 compute-0 podman[257663]: 2026-02-24 16:18:24.159486175 +0000 UTC m=+0.116946123 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:18:25 compute-0 nova_compute[188703]: 2026-02-24 16:18:25.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:18:25 compute-0 nova_compute[188703]: 2026-02-24 16:18:25.966 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:18:25 compute-0 nova_compute[188703]: 2026-02-24 16:18:25.968 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:18:25 compute-0 nova_compute[188703]: 2026-02-24 16:18:25.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:18:25 compute-0 nova_compute[188703]: 2026-02-24 16:18:25.970 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.087 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.149 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.151 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.215 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.226 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.298 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.300 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.355 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.722 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.724 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4853MB free_disk=72.0994873046875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.724 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.725 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.830 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.831 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.831 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.832 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.927 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.945 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.948 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:18:26 compute-0 nova_compute[188703]: 2026-02-24 16:18:26.949 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:18:27 compute-0 nova_compute[188703]: 2026-02-24 16:18:27.936 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:28 compute-0 nova_compute[188703]: 2026-02-24 16:18:28.827 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:29 compute-0 podman[204685]: time="2026-02-24T16:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:18:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:18:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 24 16:18:31 compute-0 openstack_network_exporter[207830]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:18:31 compute-0 openstack_network_exporter[207830]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:18:32 compute-0 nova_compute[188703]: 2026-02-24 16:18:32.939 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:33 compute-0 podman[257698]: 2026-02-24 16:18:33.099769727 +0000 UTC m=+0.061587351 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 24 16:18:33 compute-0 podman[257697]: 2026-02-24 16:18:33.101546035 +0000 UTC m=+0.066674440 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:18:33 compute-0 nova_compute[188703]: 2026-02-24 16:18:33.829 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:36 compute-0 sshd-session[257737]: Connection closed by 45.148.10.240 port 48002
Feb 24 16:18:37 compute-0 nova_compute[188703]: 2026-02-24 16:18:37.943 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:38 compute-0 podman[257739]: 2026-02-24 16:18:38.136350284 +0000 UTC m=+0.090174208 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Feb 24 16:18:38 compute-0 podman[257738]: 2026-02-24 16:18:38.143712443 +0000 UTC m=+0.102995585 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Feb 24 16:18:38 compute-0 nova_compute[188703]: 2026-02-24 16:18:38.831 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:42 compute-0 nova_compute[188703]: 2026-02-24 16:18:42.948 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:43 compute-0 nova_compute[188703]: 2026-02-24 16:18:43.833 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:44 compute-0 podman[257778]: 2026-02-24 16:18:44.185682833 +0000 UTC m=+0.140926104 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Feb 24 16:18:47 compute-0 nova_compute[188703]: 2026-02-24 16:18:47.951 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:48 compute-0 nova_compute[188703]: 2026-02-24 16:18:48.835 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:49 compute-0 podman[257799]: 2026-02-24 16:18:49.128666411 +0000 UTC m=+0.090865566 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 24 16:18:49 compute-0 podman[257800]: 2026-02-24 16:18:49.15224535 +0000 UTC m=+0.113593523 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 16:18:52 compute-0 nova_compute[188703]: 2026-02-24 16:18:52.955 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:53 compute-0 nova_compute[188703]: 2026-02-24 16:18:53.838 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:55 compute-0 podman[257845]: 2026-02-24 16:18:55.155671334 +0000 UTC m=+0.104200148 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:18:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:18:55.750 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:18:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:18:55.750 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:18:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:18:55.751 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:18:57 compute-0 nova_compute[188703]: 2026-02-24 16:18:57.960 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:58 compute-0 nova_compute[188703]: 2026-02-24 16:18:58.840 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:18:59 compute-0 podman[204685]: time="2026-02-24T16:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:18:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:18:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Feb 24 16:19:01 compute-0 openstack_network_exporter[207830]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:19:01 compute-0 openstack_network_exporter[207830]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:19:02 compute-0 nova_compute[188703]: 2026-02-24 16:19:02.964 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:03 compute-0 nova_compute[188703]: 2026-02-24 16:19:03.843 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:04 compute-0 podman[257869]: 2026-02-24 16:19:04.115656489 +0000 UTC m=+0.073055612 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 24 16:19:04 compute-0 podman[257868]: 2026-02-24 16:19:04.139172567 +0000 UTC m=+0.099150361 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:19:07 compute-0 nova_compute[188703]: 2026-02-24 16:19:07.967 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:08 compute-0 nova_compute[188703]: 2026-02-24 16:19:08.845 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:09 compute-0 podman[257909]: 2026-02-24 16:19:09.113217017 +0000 UTC m=+0.068931410 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, distribution-scope=public, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9)
Feb 24 16:19:09 compute-0 podman[257910]: 2026-02-24 16:19:09.124999987 +0000 UTC m=+0.083023223 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Feb 24 16:19:11 compute-0 nova_compute[188703]: 2026-02-24 16:19:11.950 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:12 compute-0 nova_compute[188703]: 2026-02-24 16:19:12.971 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:13 compute-0 nova_compute[188703]: 2026-02-24 16:19:13.849 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:14 compute-0 podman[257947]: 2026-02-24 16:19:14.765248648 +0000 UTC m=+0.092197301 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 16:19:14 compute-0 nova_compute[188703]: 2026-02-24 16:19:14.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:14 compute-0 nova_compute[188703]: 2026-02-24 16:19:14.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:19:16 compute-0 nova_compute[188703]: 2026-02-24 16:19:16.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:17 compute-0 nova_compute[188703]: 2026-02-24 16:19:17.937 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:17 compute-0 nova_compute[188703]: 2026-02-24 16:19:17.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:17 compute-0 nova_compute[188703]: 2026-02-24 16:19:17.941 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:19:17 compute-0 nova_compute[188703]: 2026-02-24 16:19:17.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:19:17 compute-0 nova_compute[188703]: 2026-02-24 16:19:17.975 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:18 compute-0 nova_compute[188703]: 2026-02-24 16:19:18.271 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:19:18 compute-0 nova_compute[188703]: 2026-02-24 16:19:18.273 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:19:18 compute-0 nova_compute[188703]: 2026-02-24 16:19:18.274 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:19:18 compute-0 nova_compute[188703]: 2026-02-24 16:19:18.275 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:19:18 compute-0 nova_compute[188703]: 2026-02-24 16:19:18.852 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:20 compute-0 podman[257969]: 2026-02-24 16:19:20.135876247 +0000 UTC m=+0.098376129 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 24 16:19:20 compute-0 podman[257970]: 2026-02-24 16:19:20.183575491 +0000 UTC m=+0.138931610 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.295 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.314 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.314 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.315 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.315 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:20 compute-0 nova_compute[188703]: 2026-02-24 16:19:20.316 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:22 compute-0 nova_compute[188703]: 2026-02-24 16:19:22.978 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:23 compute-0 nova_compute[188703]: 2026-02-24 16:19:23.853 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:26 compute-0 podman[258015]: 2026-02-24 16:19:26.150647669 +0000 UTC m=+0.104350891 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.313 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.966 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.967 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.967 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.967 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:19:27 compute-0 nova_compute[188703]: 2026-02-24 16:19:27.982 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.056 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.142 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.144 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.209 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.222 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.281 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.283 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.340 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.679 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.681 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4856MB free_disk=72.0994873046875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.681 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.681 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.776 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.776 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.777 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.777 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.842 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.857 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.863 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.866 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:19:28 compute-0 nova_compute[188703]: 2026-02-24 16:19:28.867 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:19:29 compute-0 podman[204685]: time="2026-02-24T16:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:19:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:19:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Feb 24 16:19:31 compute-0 openstack_network_exporter[207830]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:19:31 compute-0 openstack_network_exporter[207830]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:19:32 compute-0 nova_compute[188703]: 2026-02-24 16:19:32.986 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:33 compute-0 nova_compute[188703]: 2026-02-24 16:19:33.857 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:35 compute-0 podman[258052]: 2026-02-24 16:19:35.10306793 +0000 UTC m=+0.067323927 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:19:35 compute-0 podman[258053]: 2026-02-24 16:19:35.121284054 +0000 UTC m=+0.074510912 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:19:37 compute-0 nova_compute[188703]: 2026-02-24 16:19:37.988 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:38 compute-0 nova_compute[188703]: 2026-02-24 16:19:38.860 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.841 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.841 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.848 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'name': 'te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.851 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.852 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:19:39.852552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.870 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/memory.usage volume: 43.46484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.886 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 46.25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
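The memory.usage block above is the cycle every meter repeats: a coordination check that finds no hash ring configured (so the agent simply polls its local instances), a heartbeat update, then one sample per discovered instance; the volumes (43.46484375 and 46.25) are guest memory usage in MB, consistent with the 128 MiB m1.nano flavor. A hedged sketch of that per-pollster cycle, with illustrative names rather than ceilometer's actual signatures:

    # Sketch of the per-pollster cycle the log repeats for every meter;
    # the real logic lives in ceilometer/polling/manager.py.
    import datetime

    def run_pollster(name, hashring, instances, get_stat, heartbeats):
        # No hash ring configured for this source, so no coordination is
        # needed (mirrors the "Checking if we need coordination" /
        # "not configured in a source" pair logged for every meter).
        if hashring is not None:
            raise NotImplementedError("coordinated polling not sketched here")
        heartbeats[name] = datetime.datetime.utcnow()   # heartbeat update
        return [(inst, name, get_stat(inst, name)) for inst in instances]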
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:19:39.887641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.901 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.902 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.913 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.914 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
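disk.device.allocation emits two samples per instance because the meter is per disk device; judging by the sizes (and by the 1073741824-byte capacity reported later in the cycle), these are plausibly the 1 GiB root disk and a small config-drive-like device, though the log does not name them. A quick sanity check on the values, in bytes:

    # Back-of-envelope check on the allocation samples above.
    root_alloc = 30351360       # first device of instance 25045b6d
    small_alloc = 512000        # second device
    print(root_alloc / 2**20)   # ~28.95 MiB allocated on the root disk
    print(small_alloc / 1000)   # 512 kB on the small second device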
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.915 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:19:39.915398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.918 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.922 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:19:39.923323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.923 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.925 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:19:39.925059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.925 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.925 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
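Throughout the cycle, worker 14 announces each heartbeat while worker 12 records it via _update_status, which is why the "Updated heartbeat" confirmations occasionally land out of timestamp order relative to neighbouring lines. A sketch of that producer/consumer shape, assuming a simple queue handoff (an assumption; the actual mechanism is not shown in the log):

    # Hedged sketch: one worker announces heartbeats, another persists them.
    import datetime
    import queue
    import threading

    hb_queue: "queue.Queue[str]" = queue.Queue()
    status: dict = {}

    def heartbeat(meter):            # runs in the polling worker (cf. 14)
        hb_queue.put(meter)

    def _update_status():            # runs in a status worker (cf. 12)
        while True:
            meter = hb_queue.get()
            status[meter] = datetime.datetime.utcnow()

    threading.Thread(target=_update_status, daemon=True).start()
    heartbeat("memory.usage")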
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.926 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.927 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:19:39.926668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.927 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:19:39.928246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.929 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.929 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:19:39.930380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.959 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 29162496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.959 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.989 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 30558720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.989 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.990 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.991 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.992 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:19:39.990721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.993 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:19:39.993270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.993 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 997011743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.994 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 79286088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.994 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1084653831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.994 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 115477352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.995 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.996 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/cpu volume: 293830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:19:39.996253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 334500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
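The cpu meter is cumulative guest CPU time in nanoseconds, so the 293830000000 sampled above is roughly 293.8 s consumed by the first instance since boot; deriving a utilization percentage needs two consecutive polls:

    # Worked conversion for the cpu samples above (m1.nano has 1 vCPU).
    ns = 293_830_000_000
    print(ns / 1e9)   # ~293.83 s of cumulative CPU time

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
        # Standard rate-of-change formula for a cumulative counter; not
        # quoted from ceilometer, which delegates this to its pipeline.
        return (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0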
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.997 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 1040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.998 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.998 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:19:39.997706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.998 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:39.999 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.000 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:19:39.999674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.001 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:19:40.001272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.001 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.001 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.002 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:19:40.002444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:19:40.003995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 3590146451 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3879691688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.004 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.005 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.006 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:19:40.005678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.006 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
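Unlike the meters before it, network.incoming.bytes.rate is skipped outright: when the discovery pass tied to a pollster yields no resources for the cycle, the manager short-circuits instead of emitting empty samples. The branch, in an illustrative form:

    # Sketch of the skip branch logged above (names are illustrative).
    def poll(pollster, resources):
        return [(r, pollster) for r in resources]

    def run_or_skip(pollster, resources):
        if not resources:
            # mirrors: "Skip pollster <name>, no new resources found this cycle"
            return []
        return poll(pollster, resources)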
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:19:40.007623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.008 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.008 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.008 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.008 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
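The hashrings are [None] because no source in this agent's polling configuration declares a partitioning group, so every pollster runs locally without coordination. When a group is configured, agents join it through tooz and a consistent hash ring decides which agent owns each resource. A sketch of that assignment (tooz's HashRing API quoted from memory; verify get_nodes() against your tooz release):

    # Sketch: how a hash ring splits resources across polling agents.
    from tooz import hashring

    ring = hashring.HashRing(["agent-0", "agent-1", "agent-2"])
    for resource in (b"25045b6d-8da1-4e43-b027-bab77ff8a2c1",
                     b"85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"):
        # get_nodes() returns the set of agents owning this resource
        # (API assumed from memory; check your tooz version)
        print(resource.decode(), "->", ring.get_nodes(resource, replicas=1))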
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.010 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:19:40.009292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:19:40.010289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:19:40.012005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
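The skip at manager.py:321 is the other branch of the discovery step: discovery results are cached for the duration of a cycle, and a pollster whose discovery method yields nothing new this cycle is skipped instead of re-querying libvirt. A toy version of such a per-cycle cache (hypothetical, not the manager's actual code):

    # Hypothetical per-cycle discovery cache: the first pollster pays the
    # discovery cost, later pollsters in the same cycle reuse the result.
    def make_cycle_cache(discover):
        cache = {}
        def cached(method):
            if method not in cache:
                cache[method] = discover(method)
            return cache[method]
        return cached

    discover = make_cycle_cache(lambda method: [])   # nothing discovered this cycle
    if not discover("local_instances"):
        print("Skip pollster network.outgoing.bytes.rate, "
              "no new resources found this cycle")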
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:19:40.013110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:19:40.014342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.015 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
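Note the three flavors of the same counter in this cycle: network.outgoing.bytes is the raw cumulative interface counter (1620 and 2250 above), network.outgoing.bytes.delta is the difference between two successive polls (0 above, the instances being idle), and network.outgoing.bytes.rate divides that difference by the elapsed time. The arithmetic, with an assumed 30-second polling interval (the interval is not visible in this excerpt):

    # Deriving delta and rate meters from two cumulative readings
    # (illustrative numbers; the 30 s interval is an assumption).
    prev_bytes, prev_t = 1620, 0.0    # cumulative reading at poll N
    curr_bytes, curr_t = 2250, 30.0   # cumulative reading at poll N+1
    delta = curr_bytes - prev_bytes          # *.bytes.delta -> 630
    rate = delta / (curr_t - prev_t)         # *.bytes.rate  -> 21.0 bytes/s
    print(delta, rate)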
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:19:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:19:40.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:19:40.015742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
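The "Updated heartbeat for <meter> (<timestamp>)" lines give a per-meter liveness record, which makes it easy to confirm from the journal that every configured meter is still being polled. A small parser for exactly the format shown above; pipe the ceilometer container's journal into it:

    # Extract the newest heartbeat timestamp per meter from journal lines
    # of the form: Updated heartbeat for <meter> (<ISO-8601 timestamp>)
    import re
    import sys

    PAT = re.compile(r"Updated heartbeat for (\S+) \(([^)]+)\)")
    latest = {}
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            latest[m.group(1)] = m.group(2)   # keep the newest per meter
    for meter, ts in sorted(latest.items()):
        print(meter, ts)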
Feb 24 16:19:40 compute-0 podman[258092]: 2026-02-24 16:19:40.109270252 +0000 UTC m=+0.061250822 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:19:40 compute-0 podman[258093]: 2026-02-24 16:19:40.121288428 +0000 UTC m=+0.070740699 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
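These health_status records are podman's periodic healthchecks: each container's config_data carries a 'healthcheck' entry (for kepler, the test command '/openstack/healthcheck kepler' run against the mounted healthchecks volume), and podman journals the outcome as health_status plus health_failing_streak. The same state can be read on demand; the exact key layout under State varies between podman versions, hence the defensive lookup:

    # Query a container's health state via the podman CLI; the field name
    # differs across podman versions, so probe both spellings.
    import json
    import subprocess

    out = subprocess.run(["podman", "inspect", "kepler"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0].get("State", {})
    # newer podman exposes State.Health, older releases State.Healthcheck
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))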
Feb 24 16:19:42 compute-0 nova_compute[188703]: 2026-02-24 16:19:42.991 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:43 compute-0 nova_compute[188703]: 2026-02-24 16:19:43.864 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
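The recurring ovsdbapp lines are nova-compute's OVSDB IDL thread: "[POLLIN] on fd 26" is ovs.poller reporting that the ovsdb-server socket became readable (typically a monitor update or keepalive), after which the IDL consumes the message and sleeps again. Underneath this is an ordinary poll() loop; a stdlib illustration of the same wakeup, using a pipe in place of the OVSDB socket:

    # What "[POLLIN] on fd N" boils down to: a poll() loop waking when a
    # registered fd becomes readable (stdlib illustration, not ovs.poller).
    import os
    import select

    r, w = os.pipe()
    p = select.poll()
    p.register(r, select.POLLIN)
    os.write(w, b"update")              # stand-in for ovsdb-server sending data
    for fd, events in p.poll(1000):     # returns as soon as fd r is readable
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}:", os.read(fd, 64))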
Feb 24 16:19:45 compute-0 podman[258128]: 2026-02-24 16:19:45.140793191 +0000 UTC m=+0.091848253 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc.)
Feb 24 16:19:47 compute-0 nova_compute[188703]: 2026-02-24 16:19:47.995 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:48 compute-0 nova_compute[188703]: 2026-02-24 16:19:48.866 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:51 compute-0 podman[258151]: 2026-02-24 16:19:51.14588887 +0000 UTC m=+0.105958836 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 16:19:51 compute-0 podman[258152]: 2026-02-24 16:19:51.171594917 +0000 UTC m=+0.123774649 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:19:53 compute-0 nova_compute[188703]: 2026-02-24 16:19:52.999 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:53 compute-0 nova_compute[188703]: 2026-02-24 16:19:53.869 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:19:55.752 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:19:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:19:55.753 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:19:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:19:55.754 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
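The acquire/acquired/released trio with wait and hold times is oslo.concurrency's standard lock instrumentation; here the metadata agent's ProcessMonitor serializes its child-process check behind an in-process lock. The usual declaration looks like this (the lock name matches the one in the log; external=True would make it a file lock instead):

    # oslo.concurrency lock usage; the body is a placeholder, not the
    # actual neutron ProcessMonitor logic.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # ... verify spawned haproxy children are alive, respawn as needed ...
        return "checked"

    print(check_child_processes())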
Feb 24 16:19:57 compute-0 podman[258195]: 2026-02-24 16:19:57.113570724 +0000 UTC m=+0.069720832 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:19:58 compute-0 nova_compute[188703]: 2026-02-24 16:19:58.003 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:58 compute-0 nova_compute[188703]: 2026-02-24 16:19:58.872 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:19:59 compute-0 podman[204685]: time="2026-02-24T16:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:19:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:19:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
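These two GETs are podman_exporter scraping the libpod REST API over the unix socket named in its CONTAINER_HOST setting above (unix:///run/podman/podman.sock). The same endpoint can be queried from the standard library by pointing an HTTPConnection at the socket:

    # Hit the libpod endpoint from the log over the podman unix socket
    # (socket path taken from the exporter's CONTAINER_HOST; requires
    # permission to read the socket, typically root).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")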
Feb 24 16:20:01 compute-0 openstack_network_exporter[207830]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:20:01 compute-0 openstack_network_exporter[207830]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
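These two errors are the network exporter invoking ovs-appctl's dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, commands that only exist for the userspace (netdev/DPDK) datapath with PMD threads. This node runs the kernel datapath (the port binding later in the log reports "datapath_type": "system"), so ovs-vswitchd answers "please specify an existing datapath"; the messages are noise unless DPDK was expected. A quick way to confirm which datapaths exist:

    # List configured OVS datapaths; a kernel-datapath node shows
    # system@ovs-system, and without a netdev datapath the pmd-* appctl
    # calls fail exactly as logged above.
    import subprocess

    show = subprocess.run(["ovs-appctl", "dpif/show"],
                          capture_output=True, text=True, check=True).stdout
    print(show)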
Feb 24 16:20:03 compute-0 nova_compute[188703]: 2026-02-24 16:20:03.007 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:03 compute-0 nova_compute[188703]: 2026-02-24 16:20:03.874 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:06 compute-0 podman[258219]: 2026-02-24 16:20:06.115231321 +0000 UTC m=+0.073664069 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:20:06 compute-0 podman[258220]: 2026-02-24 16:20:06.117367619 +0000 UTC m=+0.073248528 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 24 16:20:08 compute-0 nova_compute[188703]: 2026-02-24 16:20:08.012 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:08 compute-0 nova_compute[188703]: 2026-02-24 16:20:08.877 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:11 compute-0 podman[258262]: 2026-02-24 16:20:11.118223996 +0000 UTC m=+0.077466662 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Feb 24 16:20:11 compute-0 podman[258263]: 2026-02-24 16:20:11.144015986 +0000 UTC m=+0.099294494 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 24 16:20:12 compute-0 nova_compute[188703]: 2026-02-24 16:20:12.869 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:13 compute-0 nova_compute[188703]: 2026-02-24 16:20:13.016 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:13 compute-0 nova_compute[188703]: 2026-02-24 16:20:13.880 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:15 compute-0 nova_compute[188703]: 2026-02-24 16:20:15.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:15 compute-0 nova_compute[188703]: 2026-02-24 16:20:15.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
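The req-b316a39c-... lines are nova-compute's periodic task loop: each ComputeManager._poll_* method is registered through oslo.service's periodic_task decorator and dispatched by run_periodic_tasks on its own spacing, and _reclaim_queued_deletes returns immediately here because reclaim_instance_interval <= 0 on this node (soft-deleted instances are never reclaimed). Declaration follows this pattern (spacing value illustrative, not nova's actual default):

    # How nova-style periodic tasks are declared with oslo.service.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)   # illustrative spacing
        def _poll_rebooting_instances(self, context):
            print("Running periodic task _poll_rebooting_instances")

    # the service loop later invokes manager.run_periodic_tasks(context) on
    # a timer, which produces the "Running periodic task ..." journal lines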
Feb 24 16:20:16 compute-0 podman[258299]: 2026-02-24 16:20:16.159720916 +0000 UTC m=+0.108871834 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Feb 24 16:20:16 compute-0 nova_compute[188703]: 2026-02-24 16:20:16.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:18 compute-0 nova_compute[188703]: 2026-02-24 16:20:18.021 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:18 compute-0 nova_compute[188703]: 2026-02-24 16:20:18.880 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:18 compute-0 nova_compute[188703]: 2026-02-24 16:20:18.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:19 compute-0 nova_compute[188703]: 2026-02-24 16:20:19.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:19 compute-0 nova_compute[188703]: 2026-02-24 16:20:19.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:19 compute-0 nova_compute[188703]: 2026-02-24 16:20:19.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:20:20 compute-0 nova_compute[188703]: 2026-02-24 16:20:20.277 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:20:20 compute-0 nova_compute[188703]: 2026-02-24 16:20:20.278 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:20:20 compute-0 nova_compute[188703]: 2026-02-24 16:20:20.279 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:20:21 compute-0 nova_compute[188703]: 2026-02-24 16:20:21.398 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:20:21 compute-0 nova_compute[188703]: 2026-02-24 16:20:21.413 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:20:21 compute-0 nova_compute[188703]: 2026-02-24 16:20:21.413 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
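The JSON payload in the cache-refresh line above is the Neutron network_info entry that nova stores per instance. When reading such logs offline, the useful fields can be pulled out directly; a minimal Python sketch (the literal below is an abbreviated copy of the logged record, not the full payload):

    import json

    # Abbreviated copy of the list logged by update_instance_cache_with_nw_info.
    network_info = json.loads("""[{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d",
      "address": "fa:16:3e:b7:9e:74",
      "network": {"bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/16",
                               "ips": [{"address": "10.100.0.204", "type": "fixed"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tap8140cef6-d8"}]""")

    for vif in network_info:
        net = vif["network"]
        fixed = [ip["address"]
                 for subnet in net["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
        # -> tap8140cef6-d8 br-int mtu=1442 ['10.100.0.204']
        print(vif["devname"], net["bridge"], "mtu=%d" % net["meta"]["mtu"], fixed)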
Feb 24 16:20:21 compute-0 nova_compute[188703]: 2026-02-24 16:20:21.414 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:21 compute-0 nova_compute[188703]: 2026-02-24 16:20:21.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:22 compute-0 podman[258322]: 2026-02-24 16:20:22.120360346 +0000 UTC m=+0.078873270 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 24 16:20:22 compute-0 podman[258323]: 2026-02-24 16:20:22.210896733 +0000 UTC m=+0.157235706 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 24 16:20:23 compute-0 nova_compute[188703]: 2026-02-24 16:20:23.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:23 compute-0 nova_compute[188703]: 2026-02-24 16:20:23.884 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.027 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:28 compute-0 podman[258367]: 2026-02-24 16:20:28.131454218 +0000 UTC m=+0.081706529 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.888 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.979 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.980 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.980 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
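The three lockutils lines above are oslo.concurrency's standard acquire / held / released logging around the resource tracker's "compute_resources" lock. A minimal sketch of the same pattern (nova itself wraps these ResourceTracker methods in a synchronized decorator; lockutils.lock() is the underlying in-process context manager and emits exactly this kind of log output):

    from oslo_concurrency import lockutils

    def clean_compute_node_cache():
        # Acquire the named in-process lock, run the critical section,
        # release it -- matching the acquired/released lines in the log.
        with lockutils.lock("compute_resources"):
            pass  # critical section; nova prunes its compute-node cache here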
Feb 24 16:20:28 compute-0 nova_compute[188703]: 2026-02-24 16:20:28.981 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.066 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.122 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.123 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.176 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.184 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.234 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.236 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.285 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
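Each instance disk audited above is probed with qemu-img info, wrapped in oslo_concurrency.prlimit so the child process is capped at 1 GiB of address space and 30 s of CPU time and a malformed image cannot hang or balloon the audit. A sketch reproducing the exact command logged above for one disk and parsing its JSON output (meant to run on the compute host; the instance path is taken from the log):

    import json
    import subprocess

    disk = "/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"], info.get("actual-size"))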
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.672 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.674 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4858MB free_disk=72.09959411621094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
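The pci_devices field in the resource view is plain JSON, so it can be summarized directly when triaging logs. A small sketch counting devices per vendor:product pair (the literal is an abbreviated copy of the list above; 1af4 is the virtio vendor ID):

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"address": "0000:00:06.0", "vendor_id": "1af4", "product_id": "1005"},
      {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"},
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"}]""")

    counts = Counter(f"{d['vendor_id']}:{d['product_id']}" for d in pci_devices)
    for dev, n in counts.most_common():
        print(dev, n)  # e.g. 1af4:1000 twice -- the two virtio NICs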
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.675 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.676 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:20:29 compute-0 podman[204685]: time="2026-02-24T16:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:20:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:20:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
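The podman[204685] lines are the podman system service answering podman_exporter's scrape over the API socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter's config_data above). The same libpod endpoint can be queried with only the standard library; the request path below is copied from the GET line above, and reading the socket requires the same privileges the exporter runs with:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])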
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.929 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.930 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.930 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.931 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:20:29 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.980 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:20:30 compute-0 nova_compute[188703]: 2026-02-24 16:20:29.999 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
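Placement treats each inventory row as an effective capacity of (total - reserved) * allocation_ratio, which is what reconciles the inventory above with the "Final resource view" line. Quick arithmetic over the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- so 2 VCPU / 768 MB / 2 GB
    # used leaves ample headroom in placement even with 6 free physical vCPUs.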
Feb 24 16:20:30 compute-0 nova_compute[188703]: 2026-02-24 16:20:30.001 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:20:30 compute-0 nova_compute[188703]: 2026-02-24 16:20:30.001 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:20:31 compute-0 openstack_network_exporter[207830]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:20:31 compute-0 openstack_network_exporter[207830]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
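These two exporter errors repeat every 30 seconds: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only exist for the userspace (netdev/DPDK) datapath, and this host binds ports with datapath_type "system" (see the port binding logged at 16:20:21), so ovs-vswitchd has no matching datapath to report on. A sketch that probes for PMD stats the same way and tolerates the kernel-datapath case:

    import subprocess

    # PMD stats exist only for the userspace (netdev) datapath; on a
    # kernel-datapath host this fails just like the exporter's appctl call.
    try:
        out = subprocess.run(
            ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True, check=True,
        ).stdout
        print(out)
    except subprocess.CalledProcessError as err:
        print("no userspace datapath:", err.stderr.strip())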
Feb 24 16:20:33 compute-0 nova_compute[188703]: 2026-02-24 16:20:33.032 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:33 compute-0 nova_compute[188703]: 2026-02-24 16:20:33.889 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:36 compute-0 sshd-session[258403]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 16:20:37 compute-0 podman[258405]: 2026-02-24 16:20:37.09394927 +0000 UTC m=+0.052996939 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:20:37 compute-0 podman[258406]: 2026-02-24 16:20:37.11536396 +0000 UTC m=+0.070358689 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true)
Feb 24 16:20:38 compute-0 nova_compute[188703]: 2026-02-24 16:20:38.036 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:38 compute-0 nova_compute[188703]: 2026-02-24 16:20:38.892 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:42 compute-0 podman[258448]: 2026-02-24 16:20:42.117257134 +0000 UTC m=+0.076793244 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public)
Feb 24 16:20:42 compute-0 podman[258449]: 2026-02-24 16:20:42.159385427 +0000 UTC m=+0.112988226 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:20:43 compute-0 nova_compute[188703]: 2026-02-24 16:20:43.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:43 compute-0 nova_compute[188703]: 2026-02-24 16:20:43.895 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:47 compute-0 podman[258485]: 2026-02-24 16:20:47.14244551 +0000 UTC m=+0.088885811 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1770267347, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 16:20:48 compute-0 nova_compute[188703]: 2026-02-24 16:20:48.044 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:48 compute-0 nova_compute[188703]: 2026-02-24 16:20:48.897 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:50 compute-0 sshd-session[258504]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
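The two sshd-session "[preauth]" closures above (172.214.45.193 and 52.176.35.114) are failed root login attempts from outside addresses. When triaging this kind of noise it helps to tally attempts per source; a sketch over an exported log file (the path is a placeholder, point it at whatever file holds these lines):

    import re
    from collections import Counter

    # Matches the sshd-session lines seen in this log.
    pat = re.compile(
        r"Connection closed by authenticating user (\S+) (\S+) port \d+ \[preauth\]")

    hits = Counter()
    with open("/var/log/messages") as fh:   # placeholder path
        for line in fh:
            m = pat.search(line)
            if m:
                hits[(m.group(1), m.group(2))] += 1

    for (user, ip), n in hits.most_common(10):
        print(f"{n:5d} {user}@{ip}")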
Feb 24 16:20:53 compute-0 nova_compute[188703]: 2026-02-24 16:20:53.047 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:53 compute-0 podman[258507]: 2026-02-24 16:20:53.146667679 +0000 UTC m=+0.102164893 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:20:53 compute-0 podman[258508]: 2026-02-24 16:20:53.181987627 +0000 UTC m=+0.137240294 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 24 16:20:53 compute-0 nova_compute[188703]: 2026-02-24 16:20:53.899 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:20:55.753 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:20:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:20:55.754 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:20:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:20:55.754 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:20:58 compute-0 nova_compute[188703]: 2026-02-24 16:20:58.051 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:58 compute-0 nova_compute[188703]: 2026-02-24 16:20:58.901 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:20:59 compute-0 podman[258567]: 2026-02-24 16:20:59.115770851 +0000 UTC m=+0.077666449 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:20:59 compute-0 podman[204685]: time="2026-02-24T16:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:20:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:20:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Feb 24 16:21:01 compute-0 openstack_network_exporter[207830]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:21:01 compute-0 openstack_network_exporter[207830]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:21:03 compute-0 nova_compute[188703]: 2026-02-24 16:21:03.055 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:03 compute-0 nova_compute[188703]: 2026-02-24 16:21:03.904 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:08 compute-0 nova_compute[188703]: 2026-02-24 16:21:08.059 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:08 compute-0 podman[258591]: 2026-02-24 16:21:08.129528457 +0000 UTC m=+0.088064000 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:21:08 compute-0 podman[258592]: 2026-02-24 16:21:08.164500606 +0000 UTC m=+0.113218883 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 24 16:21:08 compute-0 nova_compute[188703]: 2026-02-24 16:21:08.908 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:13 compute-0 nova_compute[188703]: 2026-02-24 16:21:13.063 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:13 compute-0 podman[258634]: 2026-02-24 16:21:13.153965472 +0000 UTC m=+0.109763478 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:21:13 compute-0 podman[258635]: 2026-02-24 16:21:13.202692044 +0000 UTC m=+0.155763246 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 24 16:21:13 compute-0 nova_compute[188703]: 2026-02-24 16:21:13.911 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:14 compute-0 nova_compute[188703]: 2026-02-24 16:21:14.002 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:17 compute-0 nova_compute[188703]: 2026-02-24 16:21:17.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:17 compute-0 nova_compute[188703]: 2026-02-24 16:21:17.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:17 compute-0 nova_compute[188703]: 2026-02-24 16:21:17.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
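The "Running periodic task" lines come from oslo.service's periodic task machinery, which invokes decorated ComputeManager methods on a timer; the _reclaim_queued_deletes "skipping..." line is an early-return guard on a config option. A hedged sketch of the wiring (only the names mirror the log; the bodies are stand-ins):

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class ComputeManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task
    def _poll_volume_usage(self, context):
        pass  # invoked on each run_periodic_tasks() tick, as logged above

    @periodic_task.periodic_task
    def _reclaim_queued_deletes(self, context):
        # the "CONF.reclaim_instance_interval <= 0, skipping..." line maps
        # to a guard of this shape
        if CONF.reclaim_instance_interval <= 0:
            return
```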
Feb 24 16:21:18 compute-0 nova_compute[188703]: 2026-02-24 16:21:18.067 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:18 compute-0 podman[258672]: 2026-02-24 16:21:18.183235079 +0000 UTC m=+0.133539803 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, vcs-type=git, architecture=x86_64, build-date=2026-02-05T04:57:10Z)
Feb 24 16:21:18 compute-0 nova_compute[188703]: 2026-02-24 16:21:18.915 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:19 compute-0 nova_compute[188703]: 2026-02-24 16:21:19.945 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:19 compute-0 nova_compute[188703]: 2026-02-24 16:21:19.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:21:19 compute-0 nova_compute[188703]: 2026-02-24 16:21:19.946 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:21:20 compute-0 nova_compute[188703]: 2026-02-24 16:21:20.302 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:21:20 compute-0 nova_compute[188703]: 2026-02-24 16:21:20.303 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:21:20 compute-0 nova_compute[188703]: 2026-02-24 16:21:20.303 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:21:20 compute-0 nova_compute[188703]: 2026-02-24 16:21:20.303 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:21:21 compute-0 nova_compute[188703]: 2026-02-24 16:21:21.670 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:21:21 compute-0 nova_compute[188703]: 2026-02-24 16:21:21.684 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:21:21 compute-0 nova_compute[188703]: 2026-02-24 16:21:21.685 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
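The Acquiring/Acquired/Releasing triple around the info-cache refresh is oslo.concurrency's lockutils serialising per-instance cache refreshes on a named in-process lock. A minimal sketch of the context-manager form, with the lock name format taken from the log:

```python
from oslo_concurrency import lockutils

instance_uuid = "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"

# Concurrent refreshes of the same instance's network info serialize here,
# producing the Acquiring/Acquired/Releasing lines seen above.
with lockutils.lock(f"refresh_cache-{instance_uuid}"):
    pass  # refresh the instance_info_cache (stand-in body)
```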
Feb 24 16:21:21 compute-0 nova_compute[188703]: 2026-02-24 16:21:21.686 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:21 compute-0 nova_compute[188703]: 2026-02-24 16:21:21.686 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:23 compute-0 nova_compute[188703]: 2026-02-24 16:21:23.072 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:23 compute-0 nova_compute[188703]: 2026-02-24 16:21:23.680 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:23 compute-0 nova_compute[188703]: 2026-02-24 16:21:23.918 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:23 compute-0 nova_compute[188703]: 2026-02-24 16:21:23.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:24 compute-0 podman[258694]: 2026-02-24 16:21:24.174791926 +0000 UTC m=+0.120843468 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Feb 24 16:21:24 compute-0 podman[258695]: 2026-02-24 16:21:24.214766451 +0000 UTC m=+0.153085703 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.077 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.920 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.971 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.998 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.998 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.998 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
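Here the resource tracker uses the decorator form of the same lockutils API; the 'acquired ... :: waited 0.000s' and '"released" ... :: held 0.000s' pairs are emitted by its "inner" wrapper, which times how long the caller waited for and then held the lock. A hedged sketch mirroring the lock name in the log:

```python
from oslo_concurrency import lockutils

class ResourceTracker:
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache(self):
        pass  # wait/hold durations are logged by the decorator's inner wrapper
```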
Feb 24 16:21:28 compute-0 nova_compute[188703]: 2026-02-24 16:21:28.998 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.090 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.165 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.167 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.227 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.237 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.298 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.299 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.377 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
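Each qemu-img pair above (Running cmd / CMD ... returned) is oslo.concurrency re-executing itself via `python -m oslo_concurrency.prlimit` so the child qemu-img is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) before probing an instance disk. A sketch of the equivalent call, with the disk path taken from the log:

```python
from oslo_concurrency import processutils

disk = "/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk"

# processutils builds the prlimit wrapper seen in the log from this object.
limits = processutils.ProcessLimits(address_space=1024**3, cpu_time=30)

out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
    prlimit=limits,
)
print(out)  # JSON description of the disk image
```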
Feb 24 16:21:29 compute-0 podman[204685]: time="2026-02-24T16:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:21:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:21:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
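The two access-log style lines are podman's REST service answering libpod queries over its unix socket (the podman_exporter configured later in this window points CONTAINER_HOST at unix:///run/podman/podman.sock). A stdlib-only sketch of the first query, raw HTTP over AF_UNIX; the socket path is the default from that config:

```python
import socket

request = (
    "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
    "Host: d\r\n"            # any host value works for a unix-socket API
    "Connection: close\r\n"
    "\r\n"
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/podman/podman.sock")
    s.sendall(request.encode())
    reply = b""
    while chunk := s.recv(65536):
        reply += chunk

print(reply.split(b"\r\n\r\n", 1)[0].decode())  # status line + headers
```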
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.830 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.832 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4854MB free_disk=72.09963989257812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.832 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.833 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.927 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.928 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.929 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.929 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.947 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.966 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:21:29 compute-0 nova_compute[188703]: 2026-02-24 16:21:29.967 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.000 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.029 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.106 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.131 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
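A useful cross-check on the inventory payloads above: placement treats the schedulable capacity of each resource class as (total - reserved) * allocation_ratio. Applying that to the logged data explains how 8 physical vCPUs can back more than the 2 currently allocated:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```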
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.134 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:21:30 compute-0 nova_compute[188703]: 2026-02-24 16:21:30.135 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:21:30 compute-0 podman[258748]: 2026-02-24 16:21:30.140482799 +0000 UTC m=+0.094769982 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:21:31 compute-0 openstack_network_exporter[207830]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:21:31 compute-0 openstack_network_exporter[207830]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
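These exporter errors are expected on this host: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to the userspace (netdev) datapath, while the port binding logged at 16:21:21 shows the kernel datapath ("datapath_type": "system"), so OVS answers that no such datapath exists. A sketch of the same probe via ovs-appctl:

```python
import subprocess

res = subprocess.run(
    ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
    capture_output=True, text=True,
)
if res.returncode != 0:
    # on a kernel-datapath host this is the same complaint the exporter logs:
    # "please specify an existing datapath"
    print(res.stderr.strip())
```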
Feb 24 16:21:33 compute-0 nova_compute[188703]: 2026-02-24 16:21:33.081 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:33 compute-0 nova_compute[188703]: 2026-02-24 16:21:33.923 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:38 compute-0 nova_compute[188703]: 2026-02-24 16:21:38.087 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:38 compute-0 nova_compute[188703]: 2026-02-24 16:21:38.925 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:39 compute-0 podman[258771]: 2026-02-24 16:21:39.134241742 +0000 UTC m=+0.090203299 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:21:39 compute-0 podman[258772]: 2026-02-24 16:21:39.145849546 +0000 UTC m=+0.090503586 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.841 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.842 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
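The two manager lines say the [pollsters] source has more pollsters than worker threads ([1] here), so its pollsters run one after another on the executor rather than in parallel. A toy illustration of that serialisation with the same executor type ceilometer uses:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

pollsters = ["memory.usage", "disk.device.allocation", "cpu"]

# max_workers=1 matches "[1] threads" in the log: total time ~= sum of tasks.
with ThreadPoolExecutor(max_workers=1) as executor:
    start = time.monotonic()
    list(executor.map(poll, pollsters))
    print(f"{time.monotonic() - start:.1f}s for {len(pollsters)} pollsters")
```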
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.850 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'name': 'te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.853 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.853 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:21:39.853499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.876 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/memory.usage volume: 42.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.901 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 46.25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
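memory.usage is reported in MiB, which is why the volumes above are fractional (42.30859375 and 46.25 against 128 MiB flavors). A hedged sketch of the conversion from libvirt's KiB ballooning stats; the helper name and fallback order are assumptions, not ceilometer's exact logic:

```python
def memory_usage_mb(dom):
    # dom is a libvirt.virDomain; available keys depend on the guest
    # balloon driver.
    stats = dom.memoryStats()
    if "available" in stats and "usable" in stats:
        used_kib = stats["available"] - stats["usable"]
    else:
        used_kib = stats.get("rss", 0)
    return used_kib / 1024.0  # KiB -> MiB, hence the fractional volumes
```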
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.902 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:21:39.902807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.918 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.919 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.932 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.932 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
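disk.device.allocation emits one sample per block device, which is why each instance logs two volumes above (the ~30 MB root disk plus a small config drive). A sketch of the per-device loop over libvirt's blockInfo(); the device names are assumptions:

```python
import libvirt

def per_device_allocation(dom, devices=("vda", "hdd")):
    samples = {}
    for dev in devices:
        try:
            capacity, allocation, physical = dom.blockInfo(dev)
        except libvirt.libvirtError:
            continue  # device absent on this domain
        samples[dev] = allocation  # bytes allocated on the backing store
    return samples
```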
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:21:39.933425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.936 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.939 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
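All of the network.* meters in this cycle read from the same per-vNIC counter tuple. An illustrative sketch over libvirt's interfaceStats(); the tap device name is an assumption:

```python
def nic_counters(dom, iface="tap0"):
    # dom is a libvirt.virDomain; iface is the host-side tap device name.
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(iface)
    return {
        "network.incoming.bytes": rx_bytes,
        "network.incoming.packets": rx_packets,
        "network.incoming.packets.drop": rx_drop,
        "network.outgoing.packets": tx_packets,
        "network.outgoing.packets.error": tx_errs,
        "network.outgoing.packets.drop": tx_drop,
    }
```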
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:21:39.940371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.941 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
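The .delta variants subtract the value cached on the previous polling cycle from the current cumulative counter, which is why one instance reports 0 and the other 630 above. A minimal sketch of that cache; the layout is an assumption for illustration:

```python
_prev = {}  # (instance_id, meter) -> cumulative value from the last cycle

def to_delta(instance_id, meter, cumulative):
    key = (instance_id, meter)
    prev = _prev.get(key)
    _prev[key] = cumulative
    if prev is None or cumulative < prev:  # first cycle, or counter reset
        return 0
    return cumulative - prev
```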
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:21:39.941532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.942 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
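The power.state volume of 1 is libvirt's domain-state enum, where 1 is VIR_DOMAIN_RUNNING, consistent with both instances being active. Sketch:

```python
import libvirt

def power_state(dom):
    # dom.info() returns [state, maxMem, memory, nrVirtCpu, cpuTime].
    state, _max_mem, _mem, _vcpus, _cpu_time_ns = dom.info()
    return state  # 1 == libvirt.VIR_DOMAIN_RUNNING, matching the log
```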
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:21:39.942637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.945 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:21:39.943746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:21:39.945292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.985 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 30386688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:39.986 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.031 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 30558720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.032 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
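The disk.device.read.* and disk.device.write.* byte/request meters come from cumulative per-device I/O counters. A sketch over libvirt's blockStats(); the device name is an assumption:

```python
def block_io_counters(dom, dev="vda"):
    # dom is a libvirt.virDomain; counters are cumulative since domain start.
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
    return {
        "disk.device.read.requests": rd_req,
        "disk.device.read.bytes": rd_bytes,
        "disk.device.write.requests": wr_req,
        "disk.device.write.bytes": wr_bytes,
    }
```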
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.033 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.033 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.034 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.034 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:21:40.033832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.035 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 1040585010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:21:40.035640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.036 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 90904603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.036 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1084653831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.036 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 115477352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
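The latency meters are cumulative nanoseconds spent in I/O per device, which explains the large volumes above (roughly 1.0-1.1 s of read time on the root disks). A hedged sketch using libvirt's extended block stats; the exact key names are an assumption based on the virDomainBlockStatsFlags interface:

```python
def read_write_latency_ns(dom, dev="vda"):
    stats = dom.blockStatsFlags(dev)  # extended, dict-valued block stats
    # 'rd_total_times' / 'wr_total_times' are cumulative nanoseconds.
    return stats.get("rd_total_times", 0), stats.get("wr_total_times", 0)
```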
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.037 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.038 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/cpu volume: 333500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.038 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 336130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:21:40.038391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
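The cpu meter is cumulative guest CPU time in nanoseconds, so the 333500000000 logged above is roughly 333.5 CPU-seconds. A utilisation rate could be derived by sampling the counter twice, sketched below; this is just the arithmetic, not ceilometer's transformation pipeline:

```python
import time

def cpu_util_percent(dom, interval=1.0):
    # dom.info() ends with cumulative CPU time in nanoseconds.
    _, _, _, vcpus, t0 = dom.info()
    time.sleep(interval)
    _, _, _, _, t1 = dom.info()
    return (t1 - t0) / (interval * 1e9) / vcpus * 100.0
```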
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.040 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 1086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.040 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.041 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:21:40.040186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.041 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.042 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.043 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:21:40.042677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:21:40.044256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.044 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.046 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.046 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:21:40.045880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.046 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.047 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 3697870644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:21:40.048283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.048 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.049 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3879691688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.049 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.050 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:21:40.050782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.051 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.051 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
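The skip above is the short-circuit path: when discovery yields no (new) resources for a pollster in a given cycle, the manager logs the skip instead of invoking the pollster at all. A minimal sketch with illustrative names:

```python
def run_pollster(name, resources, poll_fn):
    # poll_fn stands in for the pollster's sample method (illustrative).
    if not resources:
        print(f"Skip pollster {name}, no new resources found this cycle")
        return []
    return [poll_fn(r) for r in resources]
```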
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:21:40.053339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.053 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.054 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.054 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.056 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:21:40.056022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.056 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.057 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:21:40.057736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.058 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
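[editor's note] The .delta meters above are derived, not read directly from libvirt: each cycle the agent keeps the previous cumulative counter per instance and reports the difference. The network.outgoing.bytes.delta volume of 630 for instance 25045b6d-... is consistent with the cumulative network.outgoing.bytes reading of 2250 logged further down, against an assumed previous reading of 1620 (2250 - 1620 = 630). A minimal sketch of that bookkeeping; illustrative only, not ceilometer's actual code:

    # Illustrative cumulative-to-delta bookkeeping; the 1620 below is an
    # assumed previous reading, chosen so 2250 - 1620 = 630 as in the log.
    previous = {}

    def delta_sample(instance_id, meter, cumulative):
        key = (instance_id, meter)
        last = previous.get(key)
        previous[key] = cumulative
        return None if last is None else cumulative - last

    assert delta_sample("vm-1", "network.outgoing.bytes", 1620) is None
    assert delta_sample("vm-1", "network.outgoing.bytes", 2250) == 630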
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:21:40.059837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:21:40.061360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.063 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:21:40.063538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.065 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:21:40.064917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.065 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:21:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:21:40.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
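[editor's note] The block above is one complete ceilometer polling cycle: for each pollster the agent runs discovery (local_instances), checks whether the pollster belongs to a source that needs hashring coordination (none here), records a heartbeat, converts per-instance stats into samples (_stats_to_sample), and finally marks every pollster as processed. A minimal sketch of that control flow, with hypothetical names that only mirror the log messages, not ceilometer.polling.manager's real API:

    # Hypothetical sketch of the per-pollster flow traced in the log above.
    import datetime

    class Pollster:
        def __init__(self, name, get_volume):
            self.name = name
            self.get_volume = get_volume  # callable(instance_id) -> number

    def run_polling_task(pollsters, discover):
        heartbeats = {}
        for pollster in pollsters:
            instances = discover()                       # "Executing discovery process ..."
            if not instances:
                print(f"Skip pollster {pollster.name}, no new resources found this cycle")
                continue
            print(f"Polling pollster {pollster.name}")   # INFO line in the log
            # "Checking if we need coordination ..." -- no hashring configured here
            heartbeats[pollster.name] = datetime.datetime.utcnow()
            for instance_id in instances:
                volume = pollster.get_volume(instance_id)
                print(f"{instance_id}/{pollster.name} volume: {volume}")
            print(f"Finished polling pollster {pollster.name}")
        for pollster in pollsters:
            print(f"Finished processing pollster [{pollster.name}].")

    if __name__ == "__main__":
        run_polling_task(
            [Pollster("network.outgoing.bytes", lambda _i: 2250)],
            discover=lambda: ["25045b6d-8da1-4e43-b027-bab77ff8a2c1"],
        )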
Feb 24 16:21:43 compute-0 nova_compute[188703]: 2026-02-24 16:21:43.091 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:43 compute-0 nova_compute[188703]: 2026-02-24 16:21:43.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
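[editor's note] The recurring "[POLLIN] on fd 26" lines come from the OVS Python IDL used by ovsdbapp: nova-compute keeps an OVSDB connection open, blocks in a poll loop, and logs a wakeup whenever the database socket becomes readable. A minimal sketch of the same pattern using only the standard library (selectors stands in for ovs.poller.Poller):

    # Minimal poll-loop sketch: log a wakeup whenever a socket is readable.
    import selectors
    import socket

    def watch(sock: socket.socket) -> None:
        sel = selectors.DefaultSelector()
        sel.register(sock, selectors.EVENT_READ)
        while True:
            for key, _events in sel.select(timeout=5.0):
                print(f"[POLLIN] on fd {key.fd}")   # same shape as the vlog line
                data = key.fileobj.recv(4096)       # drain so we don't spin
                if not data:
                    return                          # peer closed the connection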
Feb 24 16:21:44 compute-0 podman[258816]: 2026-02-24 16:21:44.155048484 +0000 UTC m=+0.107481635 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, distribution-scope=public, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Feb 24 16:21:44 compute-0 podman[258817]: 2026-02-24 16:21:44.186875798 +0000 UTC m=+0.132678280 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
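[editor's note] Each podman[...] health_status event above is emitted when the container's periodic healthcheck fires; health_status=healthy with health_failing_streak=0 means the configured check command (the 'healthcheck' entry in config_data, e.g. /openstack/healthcheck kepler) exited 0. The same state can be read back after the fact; a sketch assuming the podman CLI is on PATH:

    # Query a container's current health state via `podman inspect`.
    import json
    import subprocess

    def health(container: str) -> dict:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)  # e.g. {"Status": "healthy", "FailingStreak": 0, ...}

    if __name__ == "__main__":
        print(health("kepler")["Status"])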
Feb 24 16:21:48 compute-0 nova_compute[188703]: 2026-02-24 16:21:48.096 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:48 compute-0 nova_compute[188703]: 2026-02-24 16:21:48.930 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:49 compute-0 podman[258855]: 2026-02-24 16:21:49.177939978 +0000 UTC m=+0.122367281 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, architecture=x86_64)
Feb 24 16:21:53 compute-0 nova_compute[188703]: 2026-02-24 16:21:53.101 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:53 compute-0 nova_compute[188703]: 2026-02-24 16:21:53.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:55 compute-0 podman[258877]: 2026-02-24 16:21:55.137949835 +0000 UTC m=+0.097905766 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 24 16:21:55 compute-0 podman[258878]: 2026-02-24 16:21:55.149827507 +0000 UTC m=+0.104795574 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:21:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:21:55.754 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:21:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:21:55.755 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:21:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:21:55.756 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
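[editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's standard trace for a lock-guarded call: ProcessMonitor._check_child_processes runs under an in-process lock so only one thread at a time checks the agent's spawned helper processes. The usual way to get exactly this log pattern (a sketch, assuming oslo.concurrency is installed):

    # lockutils.synchronized emits the Acquiring/acquired/released DEBUG
    # lines seen above around any function it wraps.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the named lock held; the decorator logs
        # 'Acquiring lock ...', '... acquired ... waited', '... released ... held'.
        pass

    check_child_processes()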
Feb 24 16:21:58 compute-0 nova_compute[188703]: 2026-02-24 16:21:58.104 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:58 compute-0 nova_compute[188703]: 2026-02-24 16:21:58.938 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:21:59 compute-0 podman[204685]: time="2026-02-24T16:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:21:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:21:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
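[editor's note] The two GET lines are the podman service API's access log: the prometheus-podman-exporter configured further down (CONTAINER_HOST=unix:///run/podman/podman.sock) lists all containers and fetches one-shot stats on each scrape. The same libpod endpoints can be hit directly over the socket; a stdlib-only sketch, assuming the podman system service is listening on /run/podman/podman.sock:

    # Talk to the libpod REST API over its unix socket (stdlib only).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])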
Feb 24 16:22:01 compute-0 podman[258921]: 2026-02-24 16:22:01.136128155 +0000 UTC m=+0.085285015 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:22:01 compute-0 openstack_network_exporter[207830]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:22:01 compute-0 openstack_network_exporter[207830]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
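[editor's note] The two appctl ERROR lines are expected on a host running OVS with the kernel (system) datapath: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to the userspace netdev datapath (DPDK PMD threads), so ovs-vswitchd answers "please specify an existing datapath". A guard the caller could apply first; a sketch assuming ovs-appctl is on PATH:

    # Check which datapaths exist before issuing dpif-netdev/* commands.
    import subprocess

    def has_netdev_datapath() -> bool:
        out = subprocess.run(
            ["ovs-appctl", "dpif/show"],
            capture_output=True, text=True, check=True,
        ).stdout
        # dpif/show lists datapaths like 'system@ovs-system:' or 'netdev@...'
        return any(line.strip().startswith("netdev@") for line in out.splitlines())

    if has_netdev_datapath():
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"], check=False)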
Feb 24 16:22:03 compute-0 nova_compute[188703]: 2026-02-24 16:22:03.107 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:03 compute-0 nova_compute[188703]: 2026-02-24 16:22:03.941 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:08 compute-0 nova_compute[188703]: 2026-02-24 16:22:08.111 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:08 compute-0 nova_compute[188703]: 2026-02-24 16:22:08.944 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:10 compute-0 podman[258943]: 2026-02-24 16:22:10.129228866 +0000 UTC m=+0.072556169 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:22:10 compute-0 podman[258944]: 2026-02-24 16:22:10.177049263 +0000 UTC m=+0.111013463 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:22:13 compute-0 nova_compute[188703]: 2026-02-24 16:22:13.114 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:13 compute-0 nova_compute[188703]: 2026-02-24 16:22:13.950 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:14 compute-0 podman[258987]: 2026-02-24 16:22:14.793247876 +0000 UTC m=+0.103981152 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:22:14 compute-0 podman[258986]: 2026-02-24 16:22:14.81296664 +0000 UTC m=+0.127339465 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:22:15 compute-0 nova_compute[188703]: 2026-02-24 16:22:15.108 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:17 compute-0 nova_compute[188703]: 2026-02-24 16:22:17.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:17 compute-0 nova_compute[188703]: 2026-02-24 16:22:17.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
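[editor's note] The "Running periodic task ComputeManager._*" lines are oslo.service's scheduler walking the manager's decorated methods; _reclaim_queued_deletes then short-circuits because reclaim_instance_interval is 0, i.e. soft-delete reclaim is disabled. The decorator pattern that produces these entries, sketched with oslo.service and names mirroring the log (not nova's full implementation):

    # Sketch of the oslo.service periodic-task pattern seen in the log.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # matches: 'CONF.reclaim_instance_interval <= 0, skipping...'
                return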
Feb 24 16:22:18 compute-0 nova_compute[188703]: 2026-02-24 16:22:18.119 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:18 compute-0 nova_compute[188703]: 2026-02-24 16:22:18.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:18 compute-0 nova_compute[188703]: 2026-02-24 16:22:18.954 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:20 compute-0 podman[259026]: 2026-02-24 16:22:20.15389503 +0000 UTC m=+0.107232309 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, release=1770267347, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:22:21 compute-0 nova_compute[188703]: 2026-02-24 16:22:21.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:21 compute-0 nova_compute[188703]: 2026-02-24 16:22:21.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:22:23 compute-0 nova_compute[188703]: 2026-02-24 16:22:23.124 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:23 compute-0 nova_compute[188703]: 2026-02-24 16:22:23.340 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:22:23 compute-0 nova_compute[188703]: 2026-02-24 16:22:23.341 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:22:23 compute-0 nova_compute[188703]: 2026-02-24 16:22:23.341 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:22:23 compute-0 nova_compute[188703]: 2026-02-24 16:22:23.956 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.633 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.663 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.663 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
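[editor's note] The "Updating instance_info_cache" line above dumps the refreshed network_info as JSON: a single OVS port (type "ovs", bridge br-int, bound by the ovn driver) carrying one fixed IPv4 address on the tenant network. Extracting the addresses from such a blob is straightforward; a sketch over a trimmed copy of the payload logged above:

    # Pull fixed IPs out of a nova network_info blob like the one logged above.
    import json

    def fixed_ips(network_info_json: str):
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"]

    blob = ('[{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "network":'
            ' {"subnets": [{"ips": [{"address": "10.100.0.204", "type": "fixed"}]}]}}]')
    print(list(fixed_ips(blob)))  # [('8140cef6-d8b1-4098-8470-3077a2c6668d', '10.100.0.204')]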
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.665 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.665 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:25 compute-0 nova_compute[188703]: 2026-02-24 16:22:25.666 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:26 compute-0 podman[259050]: 2026-02-24 16:22:26.173901661 +0000 UTC m=+0.122457893 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 24 16:22:26 compute-0 podman[259049]: 2026-02-24 16:22:26.182928305 +0000 UTC m=+0.126379969 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 24 16:22:26 compute-0 sshd-session[259047]: Invalid user sol from 45.148.10.240 port 60004
Feb 24 16:22:26 compute-0 sshd-session[259047]: Connection closed by invalid user sol 45.148.10.240 port 60004 [preauth]
Feb 24 16:22:27 compute-0 nova_compute[188703]: 2026-02-24 16:22:27.661 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.130 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.959 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.976 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:22:28 compute-0 nova_compute[188703]: 2026-02-24 16:22:28.978 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.085 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.162 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.164 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.260 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.270 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.362 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.363 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.417 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:22:29 compute-0 podman[204685]: time="2026-02-24T16:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:22:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:22:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.799 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.801 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4854MB free_disk=72.09963989257812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.802 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.803 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.930 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.931 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.932 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:22:29 compute-0 nova_compute[188703]: 2026-02-24 16:22:29.932 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.042 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.058 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.060 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.060 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.061 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.061 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:22:30 compute-0 nova_compute[188703]: 2026-02-24 16:22:30.076 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:31 compute-0 openstack_network_exporter[207830]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:22:31 compute-0 openstack_network_exporter[207830]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:22:31 compute-0 nova_compute[188703]: 2026-02-24 16:22:31.968 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:22:31 compute-0 nova_compute[188703]: 2026-02-24 16:22:31.969 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:22:31 compute-0 nova_compute[188703]: 2026-02-24 16:22:31.986 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:22:32 compute-0 podman[259105]: 2026-02-24 16:22:32.143222348 +0000 UTC m=+0.093919538 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:22:33 compute-0 nova_compute[188703]: 2026-02-24 16:22:33.136 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:33 compute-0 nova_compute[188703]: 2026-02-24 16:22:33.964 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:38 compute-0 nova_compute[188703]: 2026-02-24 16:22:38.139 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:38 compute-0 nova_compute[188703]: 2026-02-24 16:22:38.970 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:41 compute-0 podman[259130]: 2026-02-24 16:22:41.133325639 +0000 UTC m=+0.077324058 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:22:41 compute-0 podman[259129]: 2026-02-24 16:22:41.138986183 +0000 UTC m=+0.085303485 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:22:43 compute-0 nova_compute[188703]: 2026-02-24 16:22:43.142 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:43 compute-0 nova_compute[188703]: 2026-02-24 16:22:43.972 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:45 compute-0 podman[259168]: 2026-02-24 16:22:45.171375399 +0000 UTC m=+0.120427178 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, release=1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9)
Feb 24 16:22:45 compute-0 podman[259169]: 2026-02-24 16:22:45.173765204 +0000 UTC m=+0.117808537 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0)
Feb 24 16:22:48 compute-0 nova_compute[188703]: 2026-02-24 16:22:48.148 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:48 compute-0 nova_compute[188703]: 2026-02-24 16:22:48.975 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:51 compute-0 podman[259207]: 2026-02-24 16:22:51.155139842 +0000 UTC m=+0.105359010 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter)
Feb 24 16:22:53 compute-0 nova_compute[188703]: 2026-02-24 16:22:53.154 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:53 compute-0 nova_compute[188703]: 2026-02-24 16:22:53.979 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:22:55.756 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:22:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:22:55.757 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:22:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:22:55.758 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:22:57 compute-0 podman[259229]: 2026-02-24 16:22:57.182652719 +0000 UTC m=+0.139366501 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 24 16:22:57 compute-0 podman[259230]: 2026-02-24 16:22:57.245012022 +0000 UTC m=+0.194842915 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 24 16:22:58 compute-0 nova_compute[188703]: 2026-02-24 16:22:58.156 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:58 compute-0 nova_compute[188703]: 2026-02-24 16:22:58.982 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:22:59 compute-0 podman[204685]: time="2026-02-24T16:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:22:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:22:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Feb 24 16:23:01 compute-0 openstack_network_exporter[207830]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:23:01 compute-0 openstack_network_exporter[207830]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:23:03 compute-0 podman[259275]: 2026-02-24 16:23:03.155706584 +0000 UTC m=+0.103133443 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:23:03 compute-0 nova_compute[188703]: 2026-02-24 16:23:03.160 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:03 compute-0 nova_compute[188703]: 2026-02-24 16:23:03.986 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:08 compute-0 nova_compute[188703]: 2026-02-24 16:23:08.165 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:08 compute-0 nova_compute[188703]: 2026-02-24 16:23:08.988 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:12 compute-0 podman[259301]: 2026-02-24 16:23:12.179335786 +0000 UTC m=+0.117011245 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 24 16:23:12 compute-0 podman[259300]: 2026-02-24 16:23:12.182541293 +0000 UTC m=+0.124936428 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:23:13 compute-0 nova_compute[188703]: 2026-02-24 16:23:13.172 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:13 compute-0 nova_compute[188703]: 2026-02-24 16:23:13.969 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:13 compute-0 nova_compute[188703]: 2026-02-24 16:23:13.991 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:16 compute-0 podman[259341]: 2026-02-24 16:23:16.177962145 +0000 UTC m=+0.116136852 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:23:16 compute-0 podman[259340]: 2026-02-24 16:23:16.179720073 +0000 UTC m=+0.121080556 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, config_id=kepler, io.openshift.expose-services=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:23:18 compute-0 nova_compute[188703]: 2026-02-24 16:23:18.183 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:18 compute-0 nova_compute[188703]: 2026-02-24 16:23:18.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:18 compute-0 nova_compute[188703]: 2026-02-24 16:23:18.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:18 compute-0 nova_compute[188703]: 2026-02-24 16:23:18.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
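The _reclaim_queued_deletes task short-circuits here because reclaim_instance_interval is at its default of 0. A hypothetical nova.conf snippet that would enable deferred (soft) deletes, so this periodic task actually reclaims instances:

    [DEFAULT]
    # Seconds to keep soft-deleted instances before the periodic task
    # reclaims them; <= 0 (the default) skips reclaim entirely, as logged above.
    reclaim_instance_interval = 600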
Feb 24 16:23:18 compute-0 nova_compute[188703]: 2026-02-24 16:23:18.994 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:22 compute-0 podman[259381]: 2026-02-24 16:23:22.176900894 +0000 UTC m=+0.116986386 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, release=1770267347, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.188 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.940 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:23:23 compute-0 nova_compute[188703]: 2026-02-24 16:23:23.997 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:24 compute-0 nova_compute[188703]: 2026-02-24 16:23:24.618 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:23:24 compute-0 nova_compute[188703]: 2026-02-24 16:23:24.620 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:23:24 compute-0 nova_compute[188703]: 2026-02-24 16:23:24.621 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:23:24 compute-0 nova_compute[188703]: 2026-02-24 16:23:24.622 188707 DEBUG nova.objects.instance [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.209 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.233 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.234 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
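The network_info blob written into the info cache above is a JSON list with one VIF per port. A minimal, self-contained Python sketch for pulling the fixed IPs out of such a blob (the literal below is trimmed from the log line, keeping only the fields the loop touches):

    import json

    # Trimmed from the network_info logged above (one VIF, one subnet)
    raw = '''[{"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60",
               "address": "fa:16:3e:ab:a3:60",
               "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                        "ips": [{"address": "10.100.2.165",
                                                 "type": "fixed"}]}]}}]'''

    for vif in json.loads(raw):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                # -> 06aae3cb-... fa:16:3e:ab:a3:60 10.100.2.165 fixed
                print(vif["id"], vif["address"], ip["address"], ip["type"])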
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.235 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.236 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:26 compute-0 nova_compute[188703]: 2026-02-24 16:23:26.237 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:28 compute-0 podman[259403]: 2026-02-24 16:23:28.160716853 +0000 UTC m=+0.102447665 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.193 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:28 compute-0 podman[259404]: 2026-02-24 16:23:28.260232817 +0000 UTC m=+0.194830227 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.984 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:23:28 compute-0 nova_compute[188703]: 2026-02-24 16:23:28.985 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.002 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.123 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.217 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.219 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.301 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.312 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.386 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.388 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:23:29 compute-0 nova_compute[188703]: 2026-02-24 16:23:29.465 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
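Each qemu-img probe above is wrapped in oslo_concurrency.prlimit, capping the child's address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image cannot wedge the compute service. Roughly the same call from Python, as a sketch against the oslo.concurrency API:

    from oslo_concurrency import processutils

    # --force-share lets qemu-img read the disk while the guest holds it open
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))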
Feb 24 16:23:29 compute-0 sshd-session[259454]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 16:23:29 compute-0 podman[204685]: time="2026-02-24T16:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:23:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:23:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
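Those GET /v4.9.3/libpod/... lines are the libpod REST API served over the podman socket (the podman_exporter container later in this log mounts it at /run/podman/podman.sock). The same query by hand, assuming curl is available on the host:

    curl --unix-socket /run/podman/podman.sock \
         'http://d/v4.9.3/libpod/containers/json?all=true'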
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.011 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
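That warning only matters for instances that request the socket PCI NUMA affinity policy, which is a flavor (or image) property; on this topology nova simply reports it as unsupported. For reference, the policy would be requested like this (hypothetical use of the m1.nano flavor seen later in this log; the property name is the real one):

    openstack flavor set m1.nano --property hw:pci_numa_affinity_policy=socket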
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.016 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4862MB free_disk=72.09963989257812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.017 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.018 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.140 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.141 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.142 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.143 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.339 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.356 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.359 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:23:30 compute-0 nova_compute[188703]: 2026-02-24 16:23:30.360 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
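The inventory reported to placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for this node, as a sketch:

    # Capacity placement will allow against provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4:
    vcpu = (8    - 0)   * 4.0   # 32 schedulable VCPUs (2 currently allocated)
    ram  = (7679 - 512) * 1.0   # 7167 MB
    disk = (79   - 1)   * 0.9   # 70.2 GB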
Feb 24 16:23:31 compute-0 openstack_network_exporter[207830]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:23:31 compute-0 openstack_network_exporter[207830]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
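The two appctl errors above are expected on this host: dpif-netdev/* commands only apply to the userspace (netdev) datapath with PMD threads, and the VIF logged earlier shows "datapath_type": "system", i.e. the kernel datapath. Commands that work on either datapath, for comparison:

    ovs-appctl dpif/show                            # lists datapaths and their ports
    ovs-vsctl get Open_vSwitch . datapath_types     # datapath types this build supports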
Feb 24 16:23:32 compute-0 nova_compute[188703]: 2026-02-24 16:23:32.356 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:23:33 compute-0 nova_compute[188703]: 2026-02-24 16:23:33.200 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:34 compute-0 nova_compute[188703]: 2026-02-24 16:23:34.003 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:34 compute-0 podman[259462]: 2026-02-24 16:23:34.1518752 +0000 UTC m=+0.093251837 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:23:38 compute-0 nova_compute[188703]: 2026-02-24 16:23:38.210 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:39 compute-0 nova_compute[188703]: 2026-02-24 16:23:39.005 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.842 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.846 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
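The warning and the "[1] threads" line above describe one ThreadPoolExecutor shared by all pollsters, so with a single worker they effectively run one after another. A rough Python sketch of that shape (the pollster objects and their poll method are stand-ins, not the ceilometer API):

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_task(pollsters, threads=1):
        # With threads=1, more pollsters than workers means sequential
        # execution -- exactly the situation the log warns about.
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(p.poll) for p in pollsters]
            for f in futures:
                f.result()  # re-raise any pollster failure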
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604f63ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.860 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'name': 'te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.866 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'name': 'te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm', 'flavor': {'id': '3303ac8b-27ad-4047-abf8-38e38cd23b6f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c4831085-6e4d-4710-9d1c-263fd9bf6235'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '95c31253f307489ba7dfda7d2823f04a', 'user_id': '69d3eddd2a7d49bf9a69e0ccbb00f957', 'hostId': 'd3925d0732c4f72eeaaf4b2930c16645d86907afa4a1bedb47910c8b', 'status': 'active', 'metadata': {'metering.server_group': '677c1c47-5c86-4e10-835b-809c15045b3b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.866 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.867 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-24T16:23:39.867138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.904 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/memory.usage volume: 42.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.931 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/memory.usage volume: 46.25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-24T16:23:39.936331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.957 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.958 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.980 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.981 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.983 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-24T16:23:39.985014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.993 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:39.999 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.002 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.003 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-24T16:23:40.003693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.004 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.005 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
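
The cycle just above (discovery at 16:23:40.001 through the Finished line at 16:23:40.007) is the pattern every meter in this log follows: instance discovery, a coordination check against the configured hashrings (none here, so no hashring lookup), a heartbeat update, one sample per discovered instance, and a closing INFO line. A minimal Python sketch of that control flow, using illustrative stand-in names rather than ceilometer's actual classes:

    # Minimal sketch of the per-pollster cycle visible in the log; all
    # class and function names here are illustrative stand-ins, not
    # ceilometer's real API.
    class SketchPollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # One sample per discovered instance, as in the
            # "<uuid>/<meter> volume: N" DEBUG lines.
            for res in resources:
                yield res, self.name, 0

    def run_pollster(pollster, discover, hashrings=None):
        resources = discover("local_instances")
        if not resources:
            print(f"Skip pollster {pollster.name}, no new resources found this cycle")
            return
        print(f"Polling pollster {pollster.name} in the context of pollsters")
        if hashrings is None:
            # No coordination group configured: no hashring membership check.
            pass
        print(f"Pollster heartbeat update: {pollster.name}")
        for res, meter, volume in pollster.get_samples(resources):
            print(f"{res}/{meter} volume: {volume}")
        print(f"Finished polling pollster {pollster.name}")

    run_pollster(SketchPollster("network.incoming.bytes"),
                 lambda method: ["instance-a", "instance-b"])

Running the sketch prints the same DEBUG/INFO sequence seen above, for two fake instances.
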
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.008 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-24T16:23:40.010052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.010 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.012 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.014 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-24T16:23:40.015802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.016 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.017 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
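
Both instances report power.state volume 1. Assuming the volume follows the usual nova/libvirt power-state numbering (an assumption; the log itself never names the states), that decodes to RUNNING:

    # Hypothetical decoding of the power.state samples above, assuming
    # nova's power_state numbering (not confirmed by this log).
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    for uuid, volume in [("25045b6d-8da1-4e43-b027-bab77ff8a2c1", 1),
                         ("85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124", 1)]:
        print(uuid, "->", POWER_STATES.get(volume, "UNKNOWN"))
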
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.020 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.020 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.021 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-24T16:23:40.020297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.022 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
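
disk.device.capacity emits two samples per instance, one per block device. The values are exact byte counts: 1073741824 B is exactly 1 GiB (presumably the root disk) and 509952 B is 498 KiB (plausibly a config drive; the device names are not shown in these lines). A quick unit check:

    # Unit check for the two capacity samples above.
    for volume in (1073741824, 509952):
        print(volume, "B =", volume / 2**30, "GiB =", volume / 2**10, "KiB")
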
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-24T16:23:40.024456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.076 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 30386688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.076 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.129 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 30558720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.131 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
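
The read.bytes poll is the slow step in this cycle: its heartbeat lands at 16:23:40.025 but the four samples arrive at .076, .076, .129 and .131, roughly 50 ms per instance, plausibly the cost of a per-domain hypervisor stats call (an assumption; the log records only the timing). A quick check of the gaps:

    # Seconds-within-minute timestamps of the six read.bytes lines above.
    ts = [40.025, 40.076, 40.076, 40.129, 40.131, 40.132]
    print([round(b - a, 3) for a, b in zip(ts, ts[1:])])
    # -> [0.051, 0.0, 0.053, 0.002, 0.001]
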
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.135 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-24T16:23:40.134368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.135 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.137 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.138 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.138 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 1040585010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.138 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.latency volume: 90904603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-24T16:23:40.137978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.139 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 1084653831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.140 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.latency volume: 115477352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.141 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.142 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.142 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/cpu volume: 335350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-24T16:23:40.142163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.143 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/cpu volume: 338010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
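
The cpu meter is cumulative CPU time in nanoseconds, so the two samples above decode to about 335 s and 338 s of CPU time consumed by each guest so far:

    # Convert the cumulative cpu samples (nanoseconds) to seconds.
    for uuid, ns in [("25045b6d-8da1-4e43-b027-bab77ff8a2c1", 335350000000),
                     ("85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124", 338010000000)]:
        print(uuid, ns / 1e9, "s")
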
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.144 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.145 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 1086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.146 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-24T16:23:40.144954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.147 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 1095 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.148 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.151 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.152 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.152 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.153 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-24T16:23:40.152225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.155 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.156 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.156 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-24T16:23:40.156215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.157 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.160 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-24T16:23:40.159704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.161 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.161 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.162 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.163 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.164 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 3697870644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.165 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.165 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 3879691688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.166 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-24T16:23:40.164531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-24T16:23:40.168998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.169 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.169 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.170 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.171 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.171 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
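
Note the interleaving around disk.device.write.latency: its heartbeat confirmation from worker 12 (16:23:40.168) is logged only after discovery for the next pollster has already started on worker 14, and the write.bytes confirmation even precedes the matching "Pollster heartbeat update" line. One way to get exactly this pattern, assuming heartbeat confirmations are written back asynchronously by a separate worker (an assumption; the log shows only the two worker ids and the ordering):

    # Sketch: the polling worker pushes heartbeats onto a queue; a
    # second worker confirms them, so confirmations can trail (or race
    # ahead of) the main polling output.
    import queue
    import threading

    heartbeats = queue.Queue()

    def confirmer():
        while (item := heartbeats.get()) is not None:
            print("Updated heartbeat for", item)

    t = threading.Thread(target=confirmer)
    t.start()
    for meter in ("disk.device.write.latency", "disk.device.write.bytes"):
        print("Pollster heartbeat update:", meter)
        heartbeats.put(meter)          # confirmation may print later
        print("Executing discovery process for the next pollster")
    heartbeats.put(None)               # sentinel stops the confirmer
    t.join()
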
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.172 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
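
Unlike the meters above, network.incoming.bytes.rate is skipped outright: discovery returns nothing new for it this cycle. One plausible mechanism, assuming the manager remembers which (pollster, resource) pairs it has already served within a cycle and filters them out before polling (an assumption; the log shows only the outcome):

    # Hypothetical per-cycle de-duplication that would produce the
    # "Skip pollster ..., no new resources found this cycle" line.
    seen = set()

    def new_resources(pollster, discovered):
        fresh = [r for r in discovered if (pollster, r) not in seen]
        seen.update((pollster, r) for r in fresh)
        return fresh

    instances = ["25045b6d-8da1-4e43-b027-bab77ff8a2c1",
                 "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124"]
    print(new_resources("network.incoming.bytes.rate", instances))  # polled
    print(new_resources("network.incoming.bytes.rate", instances))  # [] -> skip
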
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.173 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-24T16:23:40.173506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.174 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.174 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.174 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.175 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.176 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-24T16:23:40.176297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.177 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.177 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.178 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.178 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.178 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.179 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-24T16:23:40.178238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-24T16:23:40.180409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-24T16:23:40.182025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.184 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.184 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.185 14 DEBUG ceilometer.compute.pollsters [-] 25045b6d-8da1-4e43-b027-bab77ff8a2c1/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.186 14 DEBUG ceilometer.compute.pollsters [-] 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-24T16:23:40.183857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-24T16:23:40.185816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.188 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:23:40 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:23:40.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
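The ceilometer lines above repeat one cycle per pollster: discovery, a coordination check against the (empty) hashrings, the poll itself, then a heartbeat update, finishing with one "Finished processing pollster" line per meter. A minimal Python sketch of that flow follows; `pollster` and `discovery` are hypothetical stand-ins, not ceilometer's real manager.py objects.

```python
# Illustrative sketch of the per-pollster cycle seen in the log above.
# `pollster.name`, `pollster.get_samples` and `discovery` are invented here.
import datetime

def run_pollster(pollster, discovery, coordination_group=None):
    resources = discovery()  # "Executing discovery process for pollsters ..."
    if not resources:
        print(f"Skip pollster {pollster.name}, no new resources found this cycle")
        return []
    if coordination_group is None:
        # "... not configured in a source for polling that requires coordination"
        pass
    print(f"Polling pollster {pollster.name}")
    samples = pollster.get_samples(resources)  # e.g. network.outgoing.bytes volume: 2250
    print(f"Updated heartbeat for {pollster.name} "
          f"({datetime.datetime.utcnow().isoformat()})")
    return samples
```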
Feb 24 16:23:43 compute-0 podman[259487]: 2026-02-24 16:23:43.16645721 +0000 UTC m=+0.107441798 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:23:43 compute-0 podman[259488]: 2026-02-24 16:23:43.169060581 +0000 UTC m=+0.110305276 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 24 16:23:43 compute-0 nova_compute[188703]: 2026-02-24 16:23:43.216 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:44 compute-0 nova_compute[188703]: 2026-02-24 16:23:44.008 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
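The recurring "[POLLIN] on fd 26 __log_wakeup" lines come from the `ovs` Python library's poll loop (`ovs/poller.py`), which the ovsdbapp IDL connection inside nova_compute uses to wait on the OVSDB socket. A minimal sketch of that poller API, assuming the `ovs` package is installed; the fd and timeout values are illustrative:

```python
# Minimal use of the ovs Python poller, the layer emitting the
# "[POLLIN] on fd 26" debug lines above.
import ovs.poller

def wait_readable(fd, timeout_ms=5000):
    p = ovs.poller.Poller()
    p.fd_wait(fd, ovs.poller.POLLIN)  # wake when fd becomes readable
    p.timer_wait(timeout_ms)          # or when the timer expires
    p.block()                         # sleeps until one of the above fires
```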
Feb 24 16:23:47 compute-0 podman[259529]: 2026-02-24 16:23:47.167696792 +0000 UTC m=+0.108382545 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
Feb 24 16:23:47 compute-0 podman[259528]: 2026-02-24 16:23:47.173934209 +0000 UTC m=+0.119734378 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Feb 24 16:23:48 compute-0 nova_compute[188703]: 2026-02-24 16:23:48.220 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:49 compute-0 nova_compute[188703]: 2026-02-24 16:23:49.011 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:53 compute-0 podman[259570]: 2026-02-24 16:23:53.188760745 +0000 UTC m=+0.128128105 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.7, vcs-type=git, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter)
Feb 24 16:23:53 compute-0 nova_compute[188703]: 2026-02-24 16:23:53.225 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:54 compute-0 nova_compute[188703]: 2026-02-24 16:23:54.013 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:23:55.757 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:23:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:23:55.758 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:23:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:23:55.759 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
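The Acquiring/acquired/released triple above is oslo.concurrency's `synchronized` wrapper logging wait and hold times around a named in-process lock. A sketch of the pattern; the lock name is taken from the log, the function body is a placeholder:

```python
from oslo_concurrency import lockutils

# Callers of a @synchronized function serialize on the named lock;
# oslo logs the "waited"/"held" durations seen above.
@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    pass  # e.g. inspect monitored child processes, respawn any that died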
Feb 24 16:23:58 compute-0 nova_compute[188703]: 2026-02-24 16:23:58.230 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:59 compute-0 nova_compute[188703]: 2026-02-24 16:23:59.016 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:23:59 compute-0 podman[259591]: 2026-02-24 16:23:59.167720224 +0000 UTC m=+0.112883574 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 24 16:23:59 compute-0 podman[259592]: 2026-02-24 16:23:59.227003457 +0000 UTC m=+0.171401037 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:23:59 compute-0 podman[204685]: time="2026-02-24T16:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:23:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:23:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Feb 24 16:24:01 compute-0 openstack_network_exporter[207830]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:24:01 compute-0 openstack_network_exporter[207830]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
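openstack_network_exporter shells out to `ovs-appctl dpif-netdev/pmd-perf-show` and `dpif-netdev/pmd-rxq-show`, which only succeed when a userspace (netdev/DPDK) datapath exists; on a host running only the kernel datapath, ovs-vswitchd answers "please specify an existing datapath", hence the two errors above. A sketch of calling the same command and tolerating that reply:

```python
# Run the same OVS command the exporter failed on, treating the
# "please specify an existing datapath" reply (no netdev datapath on
# kernel-datapath-only hosts) as "no data" rather than a hard error.
import subprocess

def pmd_perf_show():
    try:
        res = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                             capture_output=True, text=True, check=True)
        return res.stdout
    except subprocess.CalledProcessError as err:
        if "please specify an existing datapath" in err.stderr:
            return None
        raise
```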
Feb 24 16:24:03 compute-0 nova_compute[188703]: 2026-02-24 16:24:03.235 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:04 compute-0 nova_compute[188703]: 2026-02-24 16:24:04.019 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:05 compute-0 podman[259635]: 2026-02-24 16:24:05.1894755 +0000 UTC m=+0.142594683 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:24:08 compute-0 nova_compute[188703]: 2026-02-24 16:24:08.240 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:09 compute-0 nova_compute[188703]: 2026-02-24 16:24:09.024 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:13 compute-0 nova_compute[188703]: 2026-02-24 16:24:13.245 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:13 compute-0 nova_compute[188703]: 2026-02-24 16:24:13.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:14 compute-0 nova_compute[188703]: 2026-02-24 16:24:14.028 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:14 compute-0 podman[259659]: 2026-02-24 16:24:14.163527363 +0000 UTC m=+0.103008048 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 16:24:14 compute-0 podman[259658]: 2026-02-24 16:24:14.195336748 +0000 UTC m=+0.139086178 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:24:18 compute-0 podman[259704]: 2026-02-24 16:24:18.177007202 +0000 UTC m=+0.118302801 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:24:18 compute-0 podman[259703]: 2026-02-24 16:24:18.196893826 +0000 UTC m=+0.148185123 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=)
Feb 24 16:24:18 compute-0 nova_compute[188703]: 2026-02-24 16:24:18.248 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:18 compute-0 sshd-session[259701]: Invalid user sshadmin from 185.156.73.233 port 37030
Feb 24 16:24:18 compute-0 nova_compute[188703]: 2026-02-24 16:24:18.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:18 compute-0 sshd-session[259701]: Connection closed by invalid user sshadmin 185.156.73.233 port 37030 [preauth]
Feb 24 16:24:19 compute-0 nova_compute[188703]: 2026-02-24 16:24:19.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:20 compute-0 nova_compute[188703]: 2026-02-24 16:24:20.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:20 compute-0 nova_compute[188703]: 2026-02-24 16:24:20.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
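`_reclaim_queued_deletes` is one of the oslo.service periodic tasks listed above; it exits early because `reclaim_instance_interval` is at its default of 0, which disables soft-delete reclaim. The pattern, sketched with an illustrative spacing value (not nova's actual ComputeManager):

```python
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    """Sketch of the skip logged above."""

    @periodic_task.periodic_task(spacing=300)  # spacing value is illustrative
    def _reclaim_queued_deletes(self, context):
        if self.conf.reclaim_instance_interval <= 0:
            return  # "CONF.reclaim_instance_interval <= 0, skipping..."
        ...  # reclaim instances soft-deleted longer ago than the interval
```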
Feb 24 16:24:23 compute-0 nova_compute[188703]: 2026-02-24 16:24:23.252 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:23 compute-0 nova_compute[188703]: 2026-02-24 16:24:23.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:24 compute-0 nova_compute[188703]: 2026-02-24 16:24:24.033 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:24 compute-0 podman[259742]: 2026-02-24 16:24:24.174054325 +0000 UTC m=+0.119772650 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, version=9.7, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1770267347, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible)
Feb 24 16:24:24 compute-0 nova_compute[188703]: 2026-02-24 16:24:24.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:24 compute-0 nova_compute[188703]: 2026-02-24 16:24:24.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:24:25 compute-0 nova_compute[188703]: 2026-02-24 16:24:25.623 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 24 16:24:25 compute-0 nova_compute[188703]: 2026-02-24 16:24:25.624 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquired lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 24 16:24:25 compute-0 nova_compute[188703]: 2026-02-24 16:24:25.625 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 24 16:24:26 compute-0 nova_compute[188703]: 2026-02-24 16:24:26.989 188707 DEBUG nova.network.neutron [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [{"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:24:27 compute-0 nova_compute[188703]: 2026-02-24 16:24:27.008 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Releasing lock "refresh_cache-25045b6d-8da1-4e43-b027-bab77ff8a2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 24 16:24:27 compute-0 nova_compute[188703]: 2026-02-24 16:24:27.008 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
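The instance_info_cache entry logged at 16:24:26 is plain JSON, so extracting the MAC and fixed IPs is a pair of comprehensions; the fields below are copied (truncated) from that log line:

```python
# Fields copied from the instance_info_cache entry logged above.
vif = {
    "id": "8140cef6-d8b1-4098-8470-3077a2c6668d",
    "address": "fa:16:3e:b7:9e:74",
    "network": {
        "bridge": "br-int",
        "subnets": [
            {"cidr": "10.100.0.0/16",
             "ips": [{"address": "10.100.0.204", "type": "fixed"}]}
        ],
    },
}

fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]
             if ip.get("type") == "fixed"]
print(vif["address"], fixed_ips)  # fa:16:3e:b7:9e:74 ['10.100.0.204']
```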
Feb 24 16:24:27 compute-0 nova_compute[188703]: 2026-02-24 16:24:27.009 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:27 compute-0 nova_compute[188703]: 2026-02-24 16:24:27.010 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:28 compute-0 nova_compute[188703]: 2026-02-24 16:24:28.008 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:28 compute-0 nova_compute[188703]: 2026-02-24 16:24:28.256 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:29 compute-0 nova_compute[188703]: 2026-02-24 16:24:29.035 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:29 compute-0 podman[204685]: time="2026-02-24T16:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:24:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29237 "" "Go-http-client/1.1"
Feb 24 16:24:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Feb 24 16:24:30 compute-0 podman[259764]: 2026-02-24 16:24:30.187600026 +0000 UTC m=+0.142007288 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:24:30 compute-0 podman[259765]: 2026-02-24 16:24:30.195590321 +0000 UTC m=+0.144946617 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
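The two container health_status events are podman's healthcheck timers running each container's configured test command ('/openstack/healthcheck compute' and '/openstack/healthcheck'); exit code 0 is reported as health_status=healthy with a zero failing streak. The same check can be forced by hand, sketched here with subprocess (container name taken from the log):

    # Sketch: run the same healthcheck podman's timer triggers for the
    # events above; `podman healthcheck run` exits 0 when healthy.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ceilometer_agent_compute']).returncode
    print('healthy' if rc == 0 else 'unhealthy')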
Feb 24 16:24:30 compute-0 nova_compute[188703]: 2026-02-24 16:24:30.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:24:30 compute-0 nova_compute[188703]: 2026-02-24 16:24:30.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:30 compute-0 nova_compute[188703]: 2026-02-24 16:24:30.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:30 compute-0 nova_compute[188703]: 2026-02-24 16:24:30.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:30 compute-0 nova_compute[188703]: 2026-02-24 16:24:30.973 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
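The Acquiring/acquired/released trio is oslo.concurrency's lockutils instrumentation: the resource tracker serializes every mutation behind the named "compute_resources" semaphore, and each transition is logged with its waited/held durations. A minimal sketch of the decorator pattern (the 'nova-' prefix mirrors Nova's convention; the function body is illustrative):

    # Sketch of the named-semaphore pattern behind the lock log lines.
    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs while holding the same semaphore the log shows; lockutils
        # logs acquire/release with waited/held times automatically.
        pass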
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.088 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.146 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.148 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.210 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.220 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.280 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.281 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.340 188707 DEBUG oslo_concurrency.processutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
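Each qemu-img probe above is wrapped by oslo.concurrency's prlimit helper, which re-execs the command in a child constrained to a 1 GiB address space and 30 s of CPU (the --as=1073741824 --cpu=30 arguments) so a malformed image cannot wedge the compute service. A hedged sketch of the equivalent call, using one of the instance disk paths from this log:

    # Sketch: qemu-img info under the same resource caps the log shows.
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))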
Feb 24 16:24:31 compute-0 openstack_network_exporter[207830]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:24:31 compute-0 openstack_network_exporter[207830]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
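These two exporter errors are expected on this node: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show only apply to a userspace (netdev/DPDK) datapath with PMD threads, and this host runs the kernel datapath, so ovs-vswitchd answers "please specify an existing datapath". A quick reproduction, sketched with subprocess (the appctl command string is real; the error handling is illustrative):

    # Sketch: the exporter's probe fails the same way from the CLI on a
    # kernel-datapath host.
    import subprocess

    res = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                         capture_output=True, text=True)
    if res.returncode != 0:
        print('no userspace datapath:', (res.stderr or res.stdout).strip())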
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.871 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.874 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4873MB free_disk=72.09967422485352GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.874 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.875 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.952 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.954 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.954 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.955 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.956 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.959 188707 INFO nova.compute.manager [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Terminating instance
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.960 188707 DEBUG nova.compute.manager [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.990 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.991 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.991 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:24:31 compute-0 nova_compute[188703]: 2026-02-24 16:24:31.992 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
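The final resource view reconciles exactly with the two instances audited above, each holding {DISK_GB: 1, MEMORY_MB: 128, VCPU: 1}, plus the 512 MB of reserved host memory from the inventory. A small check with the values copied from this log:

    # Check: "used_ram=768MB ... used_disk=2GB ... used_vcpus=2" follows from
    # two 128MB/1vCPU/1GB allocations plus 512MB of reserved memory.
    reserved_ram_mb = 512
    allocs = [{'MEMORY_MB': 128, 'VCPU': 1, 'DISK_GB': 1}] * 2

    used_ram = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocs)
    used_vcpus = sum(a['VCPU'] for a in allocs)
    used_disk = sum(a['DISK_GB'] for a in allocs)
    assert (used_ram, used_vcpus, used_disk) == (768, 2, 2)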
Feb 24 16:24:32 compute-0 kernel: tap06aae3cb-60 (unregistering): left promiscuous mode
Feb 24 16:24:32 compute-0 NetworkManager[56995]: <info>  [1771950272.0084] device (tap06aae3cb-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:24:32 compute-0 ovn_controller[98701]: 2026-02-24T16:24:32Z|00176|binding|INFO|Releasing lport 06aae3cb-60d1-46ff-8ae9-26775338ef60 from this chassis (sb_readonly=0)
Feb 24 16:24:32 compute-0 ovn_controller[98701]: 2026-02-24T16:24:32Z|00177|binding|INFO|Setting lport 06aae3cb-60d1-46ff-8ae9-26775338ef60 down in Southbound
Feb 24 16:24:32 compute-0 ovn_controller[98701]: 2026-02-24T16:24:32Z|00178|binding|INFO|Removing iface tap06aae3cb-60 ovn-installed in OVS
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.022 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.031 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:a3:60 10.100.2.165'], port_security=['fa:16:3e:ab:a3:60 10.100.2.165'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.165/16', 'neutron:device_id': '85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9b818-e146-43d5-9aff-1f87311842d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95c31253f307489ba7dfda7d2823f04a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c332945-b8d3-49ba-8675-a4bd059f5256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dea1c3bb-7b9c-4930-b640-f5e21cc78102, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=06aae3cb-60d1-46ff-8ae9-26775338ef60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.033 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 06aae3cb-60d1-46ff-8ae9-26775338ef60 in datapath 7ba9b818-e146-43d5-9aff-1f87311842d0 unbound from our chassis
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.035 108026 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9b818-e146-43d5-9aff-1f87311842d0
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.038 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.054 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[97102dbd-1490-4373-ae0e-8d233c9bc16d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.076 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:24:32 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 24 16:24:32 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 7min 22.373s CPU time.
Feb 24 16:24:32 compute-0 systemd-machined[158049]: Machine qemu-11-instance-0000000b terminated.
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.096 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
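Placement turns that inventory payload into schedulable capacity as (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host can back 32 vCPUs of allocations. The arithmetic, with the numbers from the payload above:

    # Placement's effective-capacity formula applied to the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2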
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.095 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[c766490e-22ea-433a-8895-190e5b30b34a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.099 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.099 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.102 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[c61f4ad2-760c-43ba-8663-dd06361c30df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.135 242142 DEBUG oslo.privsep.daemon [-] privsep: reply[953d1c83-c9a5-4415-91ff-7fbbce9a901f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.158 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[12b77011-72d0-4247-9220-e8eeafa6f4ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9b818-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:80:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511234, 'reachable_time': 38687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259835, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.173 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[2da2d0e8-947e-4006-9110-652afa30752c]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap7ba9b818-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511245, 'tstamp': 511245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259836, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9b818-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511248, 'tstamp': 511248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259836, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
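The two large privsep replies above are netlink dumps (an RTM_NEWLINK, then RTM_NEWADDR records) taken inside the ovnmeta- namespace: the agent is confirming that the tap7ba9b818-e1 proxy interface is up and still owns 10.100.0.2 plus the 169.254.169.254 metadata address. Roughly the same dump, sketched with pyroute2 (needs root and assumes the namespace still exists):

    # Sketch: read the same addresses the privsep reply contains, from
    # inside the metadata namespace named in the log.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0') as ns:
        for addr in ns.get_addr(label='tap7ba9b818-e1'):
            print(addr.get_attr('IFA_ADDRESS'))  # 10.100.0.2, 169.254.169.254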
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.175 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9b818-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.178 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.188 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.190 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9b818-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.190 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.191 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9b818-e0, col_values=(('external_ids', {'iface-id': '0f982f60-a551-4bd9-8329-8decd220388f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:32 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:32.191 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
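The DelPort/AddPort/DbSet lines are ovsdbapp IDL commands, and "Transaction caused no change" means the port was already on br-int with the right external_ids, so each commit was a no-op: the agent is re-asserting the metadata port's wiring while provisioning the network. Through ovsdbapp's public API that would look roughly like this (the DB endpoint and timeout are assumptions for this host; command names and arguments mirror the log):

    # Sketch: the three logged commands issued via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap7ba9b818-e0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap7ba9b818-e0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7ba9b818-e0',
            ('external_ids',
             {'iface-id': '0f982f60-a551-4bd9-8329-8decd220388f'})))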
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.245 188707 INFO nova.virt.libvirt.driver [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Instance destroyed successfully.
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.246 188707 DEBUG nova.objects.instance [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'resources' on Instance uuid 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.261 188707 DEBUG nova.virt.libvirt.vif [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-a5h2i7tpq5s2-2o2v72dpa5dm',id=11,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:10:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-cidn7nfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:10:01Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.262 188707 DEBUG nova.network.os_vif_util [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "address": "fa:16:3e:ab:a3:60", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.165", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06aae3cb-60", "ovs_interfaceid": "06aae3cb-60d1-46ff-8ae9-26775338ef60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.263 188707 DEBUG nova.network.os_vif_util [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.264 188707 DEBUG os_vif [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.267 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.267 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06aae3cb-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.270 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.274 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.275 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.279 188707 INFO os_vif [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:a3:60,bridge_name='br-int',has_traffic_filtering=True,id=06aae3cb-60d1-46ff-8ae9-26775338ef60,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06aae3cb-60')
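The unplug itself goes through os-vif's public API: Nova converts its own VIF model to the VIFOpenVSwitch object logged above, then os_vif.unplug() dispatches to the 'ovs' plugin, which deletes the tap port from br-int (the DelPortCommand a few lines up). A hedged sketch; the object here carries only enough fields to illustrate the call, whereas Nova passes the fully converted object:

    # Sketch of the os-vif call behind the "Unplugging vif ..." lines;
    # field values are copied from this log, the partial object is illustrative.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id='06aae3cb-60d1-46ff-8ae9-26775338ef60',
        address='fa:16:3e:ab:a3:60',
        vif_name='tap06aae3cb-60',
        bridge_name='br-int',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124',
        name='instance-0000000b')
    os_vif.unplug(v, info)  # the 'ovs' plugin removes the port from br-int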
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.280 188707 INFO nova.virt.libvirt.driver [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Deleting instance files /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124_del
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.281 188707 INFO nova.virt.libvirt.driver [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Deletion of /var/lib/nova/instances/85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124_del complete
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.349 188707 INFO nova.compute.manager [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Took 0.39 seconds to destroy the instance on the hypervisor.
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.350 188707 DEBUG oslo.service.loopingcall [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.351 188707 DEBUG nova.compute.manager [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:24:32 compute-0 nova_compute[188703]: 2026-02-24 16:24:32.351 188707 DEBUG nova.network.neutron [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.050 188707 DEBUG nova.compute.manager [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-unplugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.051 188707 DEBUG oslo_concurrency.lockutils [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.051 188707 DEBUG oslo_concurrency.lockutils [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.052 188707 DEBUG oslo_concurrency.lockutils [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.052 188707 DEBUG nova.compute.manager [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] No waiting events found dispatching network-vif-unplugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.053 188707 DEBUG nova.compute.manager [req-35326318-3d0f-4642-b600-5fc4b305f7d0 req-47891eff-fdc8-4ad5-8af4-0b0bf90e4646 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-unplugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
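This exchange is Nova's external-event plumbing: Neutron posts network-vif-unplugged to the compute API, and the manager checks whether any in-flight operation registered a waiter for that (instance, event) pair. None had here, since the delete path does not block on unplug events, hence "No waiting events found" followed by a harmless informational dispatch. A loose sketch of the pop-or-ignore pattern (this mirrors the idea, not Nova's actual InstanceEvents code):

    # Loose sketch of the waiter registry behind "No waiting events found".
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}  # (instance_uuid, event_name) -> Event
            self._lock = threading.Lock()

        def prepare(self, instance_uuid, name):
            ev = threading.Event()
            with self._lock:
                self._events[(instance_uuid, name)] = ev
            return ev  # caller blocks on ev.wait(timeout)

        def pop(self, instance_uuid, name):
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev is None:
                print('No waiting events found dispatching', name)
            else:
                ev.set()  # wakes the operation waiting on this event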
Feb 24 16:24:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:33.126 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:24:33 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:33.128 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.133 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.179 188707 DEBUG nova.network.neutron [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.199 188707 INFO nova.compute.manager [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Took 0.85 seconds to deallocate network for instance.
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.268 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.269 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.362 188707 DEBUG nova.compute.provider_tree [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.383 188707 DEBUG nova.scheduler.client.report [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.405 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.437 188707 INFO nova.scheduler.client.report [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Deleted allocations for instance 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124
Feb 24 16:24:33 compute-0 nova_compute[188703]: 2026-02-24 16:24:33.500 188707 DEBUG oslo_concurrency.lockutils [None req-4a0a2a61-5612-491f-bb93-3bca4bc95b77 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:34 compute-0 nova_compute[188703]: 2026-02-24 16:24:34.039 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.110 188707 DEBUG nova.compute.manager [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.112 188707 DEBUG oslo_concurrency.lockutils [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.112 188707 DEBUG oslo_concurrency.lockutils [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.113 188707 DEBUG oslo_concurrency.lockutils [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.114 188707 DEBUG nova.compute.manager [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] No waiting events found dispatching network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.114 188707 WARNING nova.compute.manager [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received unexpected event network-vif-plugged-06aae3cb-60d1-46ff-8ae9-26775338ef60 for instance with vm_state deleted and task_state None.
Feb 24 16:24:35 compute-0 nova_compute[188703]: 2026-02-24 16:24:35.115 188707 DEBUG nova.compute.manager [req-f474258e-5a4a-417e-936e-4b706f0c194d req-7f132e55-5980-4665-92e8-cbe443276005 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Received event network-vif-deleted-06aae3cb-60d1-46ff-8ae9-26775338ef60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:36 compute-0 podman[259855]: 2026-02-24 16:24:36.168753244 +0000 UTC m=+0.116193673 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:24:37 compute-0 nova_compute[188703]: 2026-02-24 16:24:37.271 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:39 compute-0 nova_compute[188703]: 2026-02-24 16:24:39.043 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.115 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.116 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.117 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.118 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.119 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.121 188707 INFO nova.compute.manager [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Terminating instance
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.123 188707 DEBUG nova.compute.manager [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.131 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:40 compute-0 kernel: tap8140cef6-d8 (unregistering): left promiscuous mode
Feb 24 16:24:40 compute-0 NetworkManager[56995]: <info>  [1771950280.1714] device (tap8140cef6-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 24 16:24:40 compute-0 ovn_controller[98701]: 2026-02-24T16:24:40Z|00179|binding|INFO|Releasing lport 8140cef6-d8b1-4098-8470-3077a2c6668d from this chassis (sb_readonly=0)
Feb 24 16:24:40 compute-0 ovn_controller[98701]: 2026-02-24T16:24:40Z|00180|binding|INFO|Setting lport 8140cef6-d8b1-4098-8470-3077a2c6668d down in Southbound
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.181 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.184 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 ovn_controller[98701]: 2026-02-24T16:24:40Z|00181|binding|INFO|Removing iface tap8140cef6-d8 ovn-installed in OVS
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.190 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.198 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9e:74 10.100.0.204'], port_security=['fa:16:3e:b7:9e:74 10.100.0.204'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.204/16', 'neutron:device_id': '25045b6d-8da1-4e43-b027-bab77ff8a2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9b818-e146-43d5-9aff-1f87311842d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95c31253f307489ba7dfda7d2823f04a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c332945-b8d3-49ba-8675-a4bd059f5256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dea1c3bb-7b9c-4930-b640-f5e21cc78102, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>], logical_port=8140cef6-d8b1-4098-8470-3077a2c6668d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4cbd0e4d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.201 108026 INFO neutron.agent.ovn.metadata.agent [-] Port 8140cef6-d8b1-4098-8470-3077a2c6668d in datapath 7ba9b818-e146-43d5-9aff-1f87311842d0 unbound from our chassis
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.203 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.207 108026 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ba9b818-e146-43d5-9aff-1f87311842d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.209 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[4c56e1b0-e29d-4181-970b-3bded8597991]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.210 108026 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0 namespace which is not needed anymore
Feb 24 16:24:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 24 16:24:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 46.021s CPU time.
Feb 24 16:24:40 compute-0 systemd-machined[158049]: Machine qemu-16-instance-0000000f terminated.
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.357 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.366 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.395 188707 INFO nova.virt.libvirt.driver [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Instance destroyed successfully.
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.395 188707 DEBUG nova.objects.instance [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lazy-loading 'resources' on Instance uuid 25045b6d-8da1-4e43-b027-bab77ff8a2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.408 188707 DEBUG nova.virt.libvirt.vif [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-24T16:14:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0770156-asg-6vul734dpnkl-fup6dm4nqpq6-edbi7cirdwec',id=15,image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-24T16:14:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='677c1c47-5c86-4e10-835b-809c15045b3b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95c31253f307489ba7dfda7d2823f04a',ramdisk_id='',reservation_id='r-sonqcq10',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c4831085-6e4d-4710-9d1c-263fd9bf6235',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1117509900',owner_user_name='tempest-PrometheusGabbiTest-1117509900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-24T16:14:43Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='69d3eddd2a7d49bf9a69e0ccbb00f957',uuid=25045b6d-8da1-4e43-b027-bab77ff8a2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.410 188707 DEBUG nova.network.os_vif_util [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converting VIF {"id": "8140cef6-d8b1-4098-8470-3077a2c6668d", "address": "fa:16:3e:b7:9e:74", "network": {"id": "7ba9b818-e146-43d5-9aff-1f87311842d0", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.204", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95c31253f307489ba7dfda7d2823f04a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8140cef6-d8", "ovs_interfaceid": "8140cef6-d8b1-4098-8470-3077a2c6668d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.411 188707 DEBUG nova.network.os_vif_util [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.412 188707 DEBUG os_vif [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.413 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.414 188707 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8140cef6-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.416 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.418 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.422 188707 INFO os_vif [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9e:74,bridge_name='br-int',has_traffic_filtering=True,id=8140cef6-d8b1-4098-8470-3077a2c6668d,network=Network(7ba9b818-e146-43d5-9aff-1f87311842d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8140cef6-d8')
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.423 188707 INFO nova.virt.libvirt.driver [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Deleting instance files /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1_del
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.423 188707 INFO nova.virt.libvirt.driver [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Deletion of /var/lib/nova/instances/25045b6d-8da1-4e43-b027-bab77ff8a2c1_del complete
Feb 24 16:24:40 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [NOTICE]   (253703) : haproxy version is 2.8.14-c23fe91
Feb 24 16:24:40 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [NOTICE]   (253703) : path to executable is /usr/sbin/haproxy
Feb 24 16:24:40 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [WARNING]  (253703) : Exiting Master process...
Feb 24 16:24:40 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [ALERT]    (253703) : Current worker (253705) exited with code 143 (Terminated)
Feb 24 16:24:40 compute-0 neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0[253699]: [WARNING]  (253703) : All workers exited. Exiting... (0)
Feb 24 16:24:40 compute-0 systemd[1]: libpod-b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb.scope: Deactivated successfully.
Feb 24 16:24:40 compute-0 podman[259908]: 2026-02-24 16:24:40.461489748 +0000 UTC m=+0.078893952 container died b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.476 188707 INFO nova.compute.manager [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Took 0.35 seconds to destroy the instance on the hypervisor.
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.477 188707 DEBUG oslo.service.loopingcall [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.484 188707 DEBUG nova.compute.manager [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.484 188707 DEBUG nova.network.neutron [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 24 16:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb-userdata-shm.mount: Deactivated successfully.
Feb 24 16:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-21d2ee765efcbdddfef977ee2894db02f57524e956848e9e330a34079db25ff5-merged.mount: Deactivated successfully.
Feb 24 16:24:40 compute-0 podman[259908]: 2026-02-24 16:24:40.533829771 +0000 UTC m=+0.151233985 container cleanup b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 24 16:24:40 compute-0 systemd[1]: libpod-conmon-b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb.scope: Deactivated successfully.
Feb 24 16:24:40 compute-0 podman[259948]: 2026-02-24 16:24:40.636858661 +0000 UTC m=+0.068004679 container remove b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.657 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[c884e8e7-ac55-4ef2-8e16-7f5c839049f4]: (4, ('Tue Feb 24 04:24:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0 (b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb)\nb5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb\nTue Feb 24 04:24:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0 (b5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb)\nb5011d024d709e3aa5d99148f6a427fd9a4ffb195552e5b6fb4bf7367f3e4bfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.661 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[799bfaf5-73ec-4660-8f8f-8eb80c0dde16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.663 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9b818-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.668 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 kernel: tap7ba9b818-e0: left promiscuous mode
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.676 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.678 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.683 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb2a0b2-e525-47cb-901b-55f0f17c6dd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.705 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[f354b281-bfa4-4152-87f6-07fa8d732922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.707 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d5474148-0bef-46cb-939a-ace71ae785e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.729 242109 DEBUG oslo.privsep.daemon [-] privsep: reply[d5318938-c008-4587-921e-5d76c629d08a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511227, 'reachable_time': 18679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259963, 'error': None, 'target': 'ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ba9b818\x2de146\x2d43d5\x2d9aff\x2d1f87311842d0.mount: Deactivated successfully.
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.736 108551 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ba9b818-e146-43d5-9aff-1f87311842d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 24 16:24:40 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:40.736 108551 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1bf061-6364-44e3-bc19-37a5cc5fea08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.857 188707 DEBUG nova.compute.manager [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-unplugged-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.857 188707 DEBUG oslo_concurrency.lockutils [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.858 188707 DEBUG oslo_concurrency.lockutils [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.858 188707 DEBUG oslo_concurrency.lockutils [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.858 188707 DEBUG nova.compute.manager [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] No waiting events found dispatching network-vif-unplugged-8140cef6-d8b1-4098-8470-3077a2c6668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:24:40 compute-0 nova_compute[188703]: 2026-02-24 16:24:40.859 188707 DEBUG nova.compute.manager [req-adeb8bc3-7325-44c7-accb-f3716b4aca05 req-f42ef60c-0753-47b6-90a6-c3bb1aa2c905 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-unplugged-8140cef6-d8b1-4098-8470-3077a2c6668d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.072 188707 DEBUG nova.network.neutron [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.100 188707 INFO nova.compute.manager [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Took 0.62 seconds to deallocate network for instance.
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.155 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.155 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.225 188707 DEBUG nova.compute.provider_tree [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.242 188707 DEBUG nova.scheduler.client.report [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.262 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.283 188707 INFO nova.scheduler.client.report [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Deleted allocations for instance 25045b6d-8da1-4e43-b027-bab77ff8a2c1
Feb 24 16:24:41 compute-0 nova_compute[188703]: 2026-02-24 16:24:41.340 188707 DEBUG oslo_concurrency.lockutils [None req-f91c601e-5200-437d-9133-d33cded77695 69d3eddd2a7d49bf9a69e0ccbb00f957 95c31253f307489ba7dfda7d2823f04a - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.957 188707 DEBUG nova.compute.manager [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.957 188707 DEBUG oslo_concurrency.lockutils [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Acquiring lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.957 188707 DEBUG oslo_concurrency.lockutils [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.958 188707 DEBUG oslo_concurrency.lockutils [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] Lock "25045b6d-8da1-4e43-b027-bab77ff8a2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.958 188707 DEBUG nova.compute.manager [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] No waiting events found dispatching network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.959 188707 WARNING nova.compute.manager [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received unexpected event network-vif-plugged-8140cef6-d8b1-4098-8470-3077a2c6668d for instance with vm_state deleted and task_state None.
Feb 24 16:24:42 compute-0 nova_compute[188703]: 2026-02-24 16:24:42.959 188707 DEBUG nova.compute.manager [req-2a3e8346-bf1d-436d-9b89-fc36a8a40c37 req-ecd5d6d8-33eb-42ad-b732-17bb53d19e5c 991171d177514dc5b4bc544064c562a8 c206a7a8f7a24738801346caf27a066f - - default default] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Received event network-vif-deleted-8140cef6-d8b1-4098-8470-3077a2c6668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 24 16:24:44 compute-0 nova_compute[188703]: 2026-02-24 16:24:44.046 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:44 compute-0 podman[259966]: 2026-02-24 16:24:44.832172676 +0000 UTC m=+0.130890848 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:24:44 compute-0 podman[259965]: 2026-02-24 16:24:44.852710199 +0000 UTC m=+0.158102580 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:24:45 compute-0 nova_compute[188703]: 2026-02-24 16:24:45.418 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:47 compute-0 nova_compute[188703]: 2026-02-24 16:24:47.239 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771950272.238134, 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:24:47 compute-0 nova_compute[188703]: 2026-02-24 16:24:47.240 188707 INFO nova.compute.manager [-] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] VM Stopped (Lifecycle Event)
Feb 24 16:24:47 compute-0 nova_compute[188703]: 2026-02-24 16:24:47.275 188707 DEBUG nova.compute.manager [None req-30cd6022-8ced-403d-8467-1459a6440b52 - - - - - -] [instance: 85e7cedb-f8dc-4bdd-8d84-78c1c3cc2124] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:24:49 compute-0 nova_compute[188703]: 2026-02-24 16:24:49.048 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:49 compute-0 podman[260006]: 2026-02-24 16:24:49.163268021 +0000 UTC m=+0.116357429 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30)
Feb 24 16:24:49 compute-0 podman[260007]: 2026-02-24 16:24:49.189313891 +0000 UTC m=+0.128142955 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0)
Feb 24 16:24:50 compute-0 nova_compute[188703]: 2026-02-24 16:24:50.423 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:54 compute-0 nova_compute[188703]: 2026-02-24 16:24:54.052 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:55 compute-0 podman[260046]: 2026-02-24 16:24:55.126344663 +0000 UTC m=+0.090326128 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, version=9.7, release=1770267347, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:24:55 compute-0 nova_compute[188703]: 2026-02-24 16:24:55.391 188707 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771950280.389259, 25045b6d-8da1-4e43-b027-bab77ff8a2c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 24 16:24:55 compute-0 nova_compute[188703]: 2026-02-24 16:24:55.391 188707 INFO nova.compute.manager [-] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] VM Stopped (Lifecycle Event)
Feb 24 16:24:55 compute-0 nova_compute[188703]: 2026-02-24 16:24:55.419 188707 DEBUG nova.compute.manager [None req-8097f47b-0574-4609-9f77-51c29fb332a1 - - - - - -] [instance: 25045b6d-8da1-4e43-b027-bab77ff8a2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 24 16:24:55 compute-0 nova_compute[188703]: 2026-02-24 16:24:55.426 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:55.758 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:24:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:55.759 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:24:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:24:55.759 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:24:56 compute-0 nova_compute[188703]: 2026-02-24 16:24:56.261 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:59 compute-0 nova_compute[188703]: 2026-02-24 16:24:59.055 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:24:59 compute-0 podman[204685]: time="2026-02-24T16:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:24:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:24:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3921 "" "Go-http-client/1.1"
Feb 24 16:25:00 compute-0 nova_compute[188703]: 2026-02-24 16:25:00.429 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:01 compute-0 podman[260067]: 2026-02-24 16:25:01.129244897 +0000 UTC m=+0.085705574 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:25:01 compute-0 podman[260068]: 2026-02-24 16:25:01.168937253 +0000 UTC m=+0.116244675 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:25:01 compute-0 openstack_network_exporter[207830]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:25:01 compute-0 openstack_network_exporter[207830]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:25:04 compute-0 nova_compute[188703]: 2026-02-24 16:25:04.058 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:05 compute-0 nova_compute[188703]: 2026-02-24 16:25:05.433 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:07 compute-0 podman[260110]: 2026-02-24 16:25:07.136635249 +0000 UTC m=+0.086464714 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:25:09 compute-0 nova_compute[188703]: 2026-02-24 16:25:09.062 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:10 compute-0 nova_compute[188703]: 2026-02-24 16:25:10.438 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:12 compute-0 sshd-session[260132]: Invalid user solana from 45.148.10.240 port 33418
Feb 24 16:25:12 compute-0 sshd-session[260132]: Connection closed by invalid user solana 45.148.10.240 port 33418 [preauth]
Feb 24 16:25:14 compute-0 nova_compute[188703]: 2026-02-24 16:25:14.062 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:15 compute-0 podman[260134]: 2026-02-24 16:25:15.154881762 +0000 UTC m=+0.102118585 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:25:15 compute-0 podman[260135]: 2026-02-24 16:25:15.161525291 +0000 UTC m=+0.107723856 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 24 16:25:15 compute-0 nova_compute[188703]: 2026-02-24 16:25:15.441 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:17 compute-0 nova_compute[188703]: 2026-02-24 16:25:17.102 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:18 compute-0 nova_compute[188703]: 2026-02-24 16:25:18.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:19 compute-0 nova_compute[188703]: 2026-02-24 16:25:19.066 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:20 compute-0 podman[260180]: 2026-02-24 16:25:20.144480273 +0000 UTC m=+0.099462034 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:25:20 compute-0 podman[260179]: 2026-02-24 16:25:20.159314741 +0000 UTC m=+0.114184789 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Feb 24 16:25:20 compute-0 nova_compute[188703]: 2026-02-24 16:25:20.445 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:21 compute-0 nova_compute[188703]: 2026-02-24 16:25:21.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:21 compute-0 nova_compute[188703]: 2026-02-24 16:25:21.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:25:24 compute-0 nova_compute[188703]: 2026-02-24 16:25:24.068 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:25 compute-0 nova_compute[188703]: 2026-02-24 16:25:25.449 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:25 compute-0 nova_compute[188703]: 2026-02-24 16:25:25.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:25 compute-0 nova_compute[188703]: 2026-02-24 16:25:25.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:26 compute-0 podman[260218]: 2026-02-24 16:25:26.13823664 +0000 UTC m=+0.096965098 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 24 16:25:26 compute-0 nova_compute[188703]: 2026-02-24 16:25:26.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:26 compute-0 nova_compute[188703]: 2026-02-24 16:25:26.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:25:26 compute-0 nova_compute[188703]: 2026-02-24 16:25:26.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:25:26 compute-0 nova_compute[188703]: 2026-02-24 16:25:26.960 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:25:26 compute-0 nova_compute[188703]: 2026-02-24 16:25:26.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:27 compute-0 nova_compute[188703]: 2026-02-24 16:25:27.956 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:29 compute-0 nova_compute[188703]: 2026-02-24 16:25:29.072 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:29 compute-0 podman[204685]: time="2026-02-24T16:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:25:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:25:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Feb 24 16:25:30 compute-0 nova_compute[188703]: 2026-02-24 16:25:30.453 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:31 compute-0 openstack_network_exporter[207830]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:25:31 compute-0 openstack_network_exporter[207830]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:25:31 compute-0 nova_compute[188703]: 2026-02-24 16:25:31.937 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:32 compute-0 podman[260239]: 2026-02-24 16:25:32.161017047 +0000 UTC m=+0.109080032 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:25:32 compute-0 podman[260240]: 2026-02-24 16:25:32.253639216 +0000 UTC m=+0.194985671 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:25:32 compute-0 nova_compute[188703]: 2026-02-24 16:25:32.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:25:32 compute-0 nova_compute[188703]: 2026-02-24 16:25:32.979 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:25:32 compute-0 nova_compute[188703]: 2026-02-24 16:25:32.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:25:32 compute-0 nova_compute[188703]: 2026-02-24 16:25:32.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:25:32 compute-0 nova_compute[188703]: 2026-02-24 16:25:32.982 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:25:33 compute-0 ovn_controller[98701]: 2026-02-24T16:25:33Z|00182|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.477 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.480 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5331MB free_disk=72.15745544433594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.480 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.481 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.551 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.552 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.584 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.607 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.651 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:25:33 compute-0 nova_compute[188703]: 2026-02-24 16:25:33.652 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:25:34 compute-0 nova_compute[188703]: 2026-02-24 16:25:34.072 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:35 compute-0 nova_compute[188703]: 2026-02-24 16:25:35.457 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:38 compute-0 podman[260284]: 2026-02-24 16:25:38.174228896 +0000 UTC m=+0.124517647 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:25:39 compute-0 nova_compute[188703]: 2026-02-24 16:25:39.077 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.845 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.846 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601ed52e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:25:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:25:40 compute-0 nova_compute[188703]: 2026-02-24 16:25:40.462 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:42 compute-0 sshd-session[260310]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 16:25:44 compute-0 nova_compute[188703]: 2026-02-24 16:25:44.078 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:45 compute-0 nova_compute[188703]: 2026-02-24 16:25:45.465 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:46 compute-0 podman[260313]: 2026-02-24 16:25:46.159823671 +0000 UTC m=+0.104278713 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:25:46 compute-0 podman[260312]: 2026-02-24 16:25:46.163629374 +0000 UTC m=+0.111348384 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:25:49 compute-0 nova_compute[188703]: 2026-02-24 16:25:49.081 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:50 compute-0 nova_compute[188703]: 2026-02-24 16:25:50.468 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:51 compute-0 podman[260355]: 2026-02-24 16:25:51.12291033 +0000 UTC m=+0.082249692 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=)
Feb 24 16:25:51 compute-0 podman[260356]: 2026-02-24 16:25:51.169491332 +0000 UTC m=+0.120567672 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Feb 24 16:25:54 compute-0 nova_compute[188703]: 2026-02-24 16:25:54.085 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:55 compute-0 nova_compute[188703]: 2026-02-24 16:25:55.470 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:25:55.760 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:25:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:25:55.760 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:25:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:25:55.760 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:25:57 compute-0 podman[260392]: 2026-02-24 16:25:57.169449084 +0000 UTC m=+0.114413935 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:25:59 compute-0 nova_compute[188703]: 2026-02-24 16:25:59.087 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:25:59 compute-0 podman[204685]: time="2026-02-24T16:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:25:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:25:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Feb 24 16:26:00 compute-0 nova_compute[188703]: 2026-02-24 16:26:00.473 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:01 compute-0 openstack_network_exporter[207830]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:26:01 compute-0 openstack_network_exporter[207830]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:26:01 compute-0 sshd-session[260412]: Accepted publickey for zuul from 192.168.122.10 port 53218 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 16:26:01 compute-0 systemd-logind[813]: New session 31 of user zuul.
Feb 24 16:26:01 compute-0 systemd[1]: Started Session 31 of User zuul.
Feb 24 16:26:01 compute-0 sshd-session[260412]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 16:26:02 compute-0 sudo[260416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 24 16:26:02 compute-0 sudo[260416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:26:02 compute-0 podman[260450]: 2026-02-24 16:26:02.385597764 +0000 UTC m=+0.124102516 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:26:02 compute-0 podman[260468]: 2026-02-24 16:26:02.985876227 +0000 UTC m=+0.171320136 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 24 16:26:04 compute-0 nova_compute[188703]: 2026-02-24 16:26:04.089 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:05 compute-0 nova_compute[188703]: 2026-02-24 16:26:05.478 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:07 compute-0 ovs-vsctl[260629]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 24 16:26:07 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 260440 (sos)
Feb 24 16:26:07 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 24 16:26:07 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 24 16:26:08 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 24 16:26:08 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 24 16:26:08 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 24 16:26:08 compute-0 podman[260834]: 2026-02-24 16:26:08.937604715 +0000 UTC m=+0.113278646 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:26:09 compute-0 nova_compute[188703]: 2026-02-24 16:26:09.091 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:09 compute-0 crontab[261070]: (root) LIST (root)
Feb 24 16:26:10 compute-0 nova_compute[188703]: 2026-02-24 16:26:10.481 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:11 compute-0 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Feb 24 16:26:11 compute-0 systemd[1]: Starting Hostname Service...
Feb 24 16:26:11 compute-0 systemd[1]: Started Hostname Service.
Feb 24 16:26:14 compute-0 nova_compute[188703]: 2026-02-24 16:26:14.094 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:15 compute-0 nova_compute[188703]: 2026-02-24 16:26:15.484 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:17 compute-0 podman[261541]: 2026-02-24 16:26:17.159328636 +0000 UTC m=+0.110015407 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 24 16:26:17 compute-0 podman[261539]: 2026-02-24 16:26:17.159486641 +0000 UTC m=+0.109651829 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:26:17 compute-0 nova_compute[188703]: 2026-02-24 16:26:17.652 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:18 compute-0 nova_compute[188703]: 2026-02-24 16:26:18.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:19 compute-0 nova_compute[188703]: 2026-02-24 16:26:19.096 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:20 compute-0 nova_compute[188703]: 2026-02-24 16:26:20.487 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:22 compute-0 podman[262070]: 2026-02-24 16:26:22.144014485 +0000 UTC m=+0.099456373 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9)
Feb 24 16:26:22 compute-0 podman[262076]: 2026-02-24 16:26:22.161676029 +0000 UTC m=+0.106759979 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 24 16:26:23 compute-0 ovs-appctl[262670]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 24 16:26:23 compute-0 ovs-appctl[262675]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 24 16:26:23 compute-0 ovs-appctl[262678]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 24 16:26:23 compute-0 nova_compute[188703]: 2026-02-24 16:26:23.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:23 compute-0 nova_compute[188703]: 2026-02-24 16:26:23.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:26:24 compute-0 nova_compute[188703]: 2026-02-24 16:26:24.097 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:25 compute-0 nova_compute[188703]: 2026-02-24 16:26:25.490 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:25 compute-0 nova_compute[188703]: 2026-02-24 16:26:25.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:26 compute-0 nova_compute[188703]: 2026-02-24 16:26:26.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:27 compute-0 nova_compute[188703]: 2026-02-24 16:26:27.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:27 compute-0 nova_compute[188703]: 2026-02-24 16:26:27.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:26:27 compute-0 nova_compute[188703]: 2026-02-24 16:26:27.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:26:28 compute-0 nova_compute[188703]: 2026-02-24 16:26:28.001 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:26:28 compute-0 podman[263484]: 2026-02-24 16:26:28.191037633 +0000 UTC m=+0.140085896 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, container_name=openstack_network_exporter, version=9.7, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Feb 24 16:26:28 compute-0 nova_compute[188703]: 2026-02-24 16:26:28.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:29 compute-0 nova_compute[188703]: 2026-02-24 16:26:29.099 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:29 compute-0 podman[204685]: time="2026-02-24T16:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:26:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:26:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Feb 24 16:26:29 compute-0 nova_compute[188703]: 2026-02-24 16:26:29.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:30 compute-0 nova_compute[188703]: 2026-02-24 16:26:30.492 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:31 compute-0 openstack_network_exporter[207830]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:26:31 compute-0 openstack_network_exporter[207830]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:26:32 compute-0 podman[263567]: 2026-02-24 16:26:32.970245 +0000 UTC m=+0.153020643 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:26:33 compute-0 podman[263597]: 2026-02-24 16:26:33.178769504 +0000 UTC m=+0.147379982 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:26:33 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 24 16:26:33 compute-0 nova_compute[188703]: 2026-02-24 16:26:33.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:26:33 compute-0 nova_compute[188703]: 2026-02-24 16:26:33.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:26:33 compute-0 nova_compute[188703]: 2026-02-24 16:26:33.969 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:26:33 compute-0 nova_compute[188703]: 2026-02-24 16:26:33.970 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:26:33 compute-0 nova_compute[188703]: 2026-02-24 16:26:33.970 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.101 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.331 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.332 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=71.64392471313477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.333 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.333 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.412 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.412 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.432 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.459 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.459 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.479 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.511 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:26:34 compute-0 systemd[1]: Starting Time & Date Service...
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.551 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.586 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.630 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:26:34 compute-0 nova_compute[188703]: 2026-02-24 16:26:34.631 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:26:34 compute-0 systemd[1]: Started Time & Date Service.
Feb 24 16:26:35 compute-0 nova_compute[188703]: 2026-02-24 16:26:35.496 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:39 compute-0 nova_compute[188703]: 2026-02-24 16:26:39.103 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:39 compute-0 podman[264038]: 2026-02-24 16:26:39.15799353 +0000 UTC m=+0.114977731 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:26:40 compute-0 nova_compute[188703]: 2026-02-24 16:26:40.502 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:44 compute-0 nova_compute[188703]: 2026-02-24 16:26:44.108 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:45 compute-0 nova_compute[188703]: 2026-02-24 16:26:45.507 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:47 compute-0 podman[264064]: 2026-02-24 16:26:47.722422252 +0000 UTC m=+0.122062231 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:26:47 compute-0 podman[264065]: 2026-02-24 16:26:47.727257822 +0000 UTC m=+0.119969635 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:26:49 compute-0 nova_compute[188703]: 2026-02-24 16:26:49.110 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:50 compute-0 nova_compute[188703]: 2026-02-24 16:26:50.518 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:53 compute-0 sudo[260416]: pam_unix(sudo:session): session closed for user root
Feb 24 16:26:53 compute-0 sshd-session[260415]: Received disconnect from 192.168.122.10 port 53218:11: disconnected by user
Feb 24 16:26:53 compute-0 sshd-session[260415]: Disconnected from user zuul 192.168.122.10 port 53218
Feb 24 16:26:53 compute-0 sshd-session[260412]: pam_unix(sshd:session): session closed for user zuul
Feb 24 16:26:53 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Feb 24 16:26:53 compute-0 systemd[1]: session-31.scope: Consumed 1min 43.344s CPU time, 772.1M memory peak, read 380.1M from disk, written 18.2M to disk.
Feb 24 16:26:53 compute-0 systemd-logind[813]: Session 31 logged out. Waiting for processes to exit.
Feb 24 16:26:53 compute-0 systemd-logind[813]: Removed session 31.
Feb 24 16:26:53 compute-0 podman[264108]: 2026-02-24 16:26:53.157342551 +0000 UTC m=+0.103127382 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:26:53 compute-0 podman[264107]: 2026-02-24 16:26:53.159107969 +0000 UTC m=+0.120389607 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, config_id=kepler, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container)
Feb 24 16:26:53 compute-0 sshd-session[264114]: Accepted publickey for zuul from 192.168.122.10 port 55346 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 16:26:53 compute-0 systemd-logind[813]: New session 32 of user zuul.
Feb 24 16:26:53 compute-0 systemd[1]: Started Session 32 of User zuul.
Feb 24 16:26:53 compute-0 sshd-session[264114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 16:26:53 compute-0 sudo[264147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-02-24-xvyqxpm.tar.xz
Feb 24 16:26:53 compute-0 sudo[264147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:26:53 compute-0 sudo[264147]: pam_unix(sudo:session): session closed for user root
Feb 24 16:26:53 compute-0 sshd-session[264146]: Received disconnect from 192.168.122.10 port 55346:11: disconnected by user
Feb 24 16:26:53 compute-0 sshd-session[264146]: Disconnected from user zuul 192.168.122.10 port 55346
Feb 24 16:26:53 compute-0 sshd-session[264114]: pam_unix(sshd:session): session closed for user zuul
Feb 24 16:26:53 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Feb 24 16:26:53 compute-0 systemd-logind[813]: Session 32 logged out. Waiting for processes to exit.
Feb 24 16:26:53 compute-0 systemd-logind[813]: Removed session 32.
Feb 24 16:26:53 compute-0 sshd-session[264172]: Accepted publickey for zuul from 192.168.122.10 port 55348 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 16:26:54 compute-0 systemd-logind[813]: New session 33 of user zuul.
Feb 24 16:26:54 compute-0 systemd[1]: Started Session 33 of User zuul.
Feb 24 16:26:54 compute-0 sshd-session[264172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 16:26:54 compute-0 nova_compute[188703]: 2026-02-24 16:26:54.115 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:54 compute-0 sudo[264176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Feb 24 16:26:54 compute-0 sudo[264176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:26:54 compute-0 sudo[264176]: pam_unix(sudo:session): session closed for user root
Feb 24 16:26:54 compute-0 sshd-session[264175]: Received disconnect from 192.168.122.10 port 55348:11: disconnected by user
Feb 24 16:26:54 compute-0 sshd-session[264175]: Disconnected from user zuul 192.168.122.10 port 55348
Feb 24 16:26:54 compute-0 sshd-session[264172]: pam_unix(sshd:session): session closed for user zuul
Feb 24 16:26:54 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Feb 24 16:26:54 compute-0 systemd-logind[813]: Session 33 logged out. Waiting for processes to exit.
Feb 24 16:26:54 compute-0 systemd-logind[813]: Removed session 33.
Feb 24 16:26:55 compute-0 nova_compute[188703]: 2026-02-24 16:26:55.523 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:26:55.761 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:26:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:26:55.762 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:26:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:26:55.762 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:26:59 compute-0 nova_compute[188703]: 2026-02-24 16:26:59.118 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:26:59 compute-0 podman[264201]: 2026-02-24 16:26:59.143367941 +0000 UTC m=+0.097339558 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, distribution-scope=public, release=1770267347, io.openshift.expose-services=, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:26:59 compute-0 podman[204685]: time="2026-02-24T16:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:26:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:26:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Feb 24 16:27:00 compute-0 nova_compute[188703]: 2026-02-24 16:27:00.527 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:01 compute-0 openstack_network_exporter[207830]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:27:01 compute-0 openstack_network_exporter[207830]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:27:03 compute-0 podman[264222]: 2026-02-24 16:27:03.160022085 +0000 UTC m=+0.117275963 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:27:04 compute-0 nova_compute[188703]: 2026-02-24 16:27:04.122 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:04 compute-0 podman[264241]: 2026-02-24 16:27:04.225732985 +0000 UTC m=+0.174524832 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:27:04 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 24 16:27:04 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 24 16:27:05 compute-0 nova_compute[188703]: 2026-02-24 16:27:05.531 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:09 compute-0 nova_compute[188703]: 2026-02-24 16:27:09.125 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:10 compute-0 podman[264271]: 2026-02-24 16:27:10.125432174 +0000 UTC m=+0.080812602 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:27:10 compute-0 nova_compute[188703]: 2026-02-24 16:27:10.534 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:14 compute-0 nova_compute[188703]: 2026-02-24 16:27:14.128 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:15 compute-0 nova_compute[188703]: 2026-02-24 16:27:15.538 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:17 compute-0 nova_compute[188703]: 2026-02-24 16:27:17.631 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:18 compute-0 podman[264296]: 2026-02-24 16:27:18.133767812 +0000 UTC m=+0.082362855 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:27:18 compute-0 podman[264297]: 2026-02-24 16:27:18.16829246 +0000 UTC m=+0.113429039 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:27:18 compute-0 nova_compute[188703]: 2026-02-24 16:27:18.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:19 compute-0 nova_compute[188703]: 2026-02-24 16:27:19.131 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:20 compute-0 nova_compute[188703]: 2026-02-24 16:27:20.541 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:24 compute-0 podman[264339]: 2026-02-24 16:27:24.135896243 +0000 UTC m=+0.095244391 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=kepler, name=ubi9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Feb 24 16:27:24 compute-0 nova_compute[188703]: 2026-02-24 16:27:24.136 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:24 compute-0 podman[264340]: 2026-02-24 16:27:24.160839723 +0000 UTC m=+0.108417025 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Feb 24 16:27:25 compute-0 nova_compute[188703]: 2026-02-24 16:27:25.544 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:25 compute-0 nova_compute[188703]: 2026-02-24 16:27:25.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:25 compute-0 nova_compute[188703]: 2026-02-24 16:27:25.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:25 compute-0 nova_compute[188703]: 2026-02-24 16:27:25.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:27:26 compute-0 nova_compute[188703]: 2026-02-24 16:27:26.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:27 compute-0 nova_compute[188703]: 2026-02-24 16:27:27.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:27 compute-0 nova_compute[188703]: 2026-02-24 16:27:27.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:27:27 compute-0 nova_compute[188703]: 2026-02-24 16:27:27.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:27:27 compute-0 nova_compute[188703]: 2026-02-24 16:27:27.964 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:27:28 compute-0 nova_compute[188703]: 2026-02-24 16:27:28.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:29 compute-0 nova_compute[188703]: 2026-02-24 16:27:29.141 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:29 compute-0 podman[204685]: time="2026-02-24T16:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:27:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:27:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Feb 24 16:27:30 compute-0 podman[264377]: 2026-02-24 16:27:30.133848212 +0000 UTC m=+0.090163465 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, version=9.7, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Feb 24 16:27:30 compute-0 nova_compute[188703]: 2026-02-24 16:27:30.548 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:30 compute-0 nova_compute[188703]: 2026-02-24 16:27:30.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:31 compute-0 openstack_network_exporter[207830]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:27:31 compute-0 openstack_network_exporter[207830]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:27:31 compute-0 nova_compute[188703]: 2026-02-24 16:27:31.951 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:33 compute-0 nova_compute[188703]: 2026-02-24 16:27:33.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:33 compute-0 nova_compute[188703]: 2026-02-24 16:27:33.974 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:27:33 compute-0 nova_compute[188703]: 2026-02-24 16:27:33.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:27:33 compute-0 nova_compute[188703]: 2026-02-24 16:27:33.975 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:27:33 compute-0 nova_compute[188703]: 2026-02-24 16:27:33.975 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:27:34 compute-0 podman[264397]: 2026-02-24 16:27:34.116653585 +0000 UTC m=+0.079088736 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.144 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.304 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.305 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5279MB free_disk=72.15718078613281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.306 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.306 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.666 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.667 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.772 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.794 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.813 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:27:34 compute-0 nova_compute[188703]: 2026-02-24 16:27:34.814 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:27:35 compute-0 podman[264418]: 2026-02-24 16:27:35.142612287 +0000 UTC m=+0.103924733 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:27:35 compute-0 nova_compute[188703]: 2026-02-24 16:27:35.551 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:35 compute-0 nova_compute[188703]: 2026-02-24 16:27:35.810 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:36 compute-0 nova_compute[188703]: 2026-02-24 16:27:36.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:36 compute-0 nova_compute[188703]: 2026-02-24 16:27:36.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:27:39 compute-0 nova_compute[188703]: 2026-02-24 16:27:39.147 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.845 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.846 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2604d94110>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:27:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:27:39 compute-0 nova_compute[188703]: 2026-02-24 16:27:39.962 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:27:39 compute-0 nova_compute[188703]: 2026-02-24 16:27:39.963 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:27:39 compute-0 nova_compute[188703]: 2026-02-24 16:27:39.987 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:27:40 compute-0 nova_compute[188703]: 2026-02-24 16:27:40.555 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:41 compute-0 podman[264446]: 2026-02-24 16:27:41.17415514 +0000 UTC m=+0.122242326 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:27:44 compute-0 nova_compute[188703]: 2026-02-24 16:27:44.150 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:45 compute-0 nova_compute[188703]: 2026-02-24 16:27:45.559 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:49 compute-0 podman[264470]: 2026-02-24 16:27:49.14663005 +0000 UTC m=+0.096246938 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:27:49 compute-0 nova_compute[188703]: 2026-02-24 16:27:49.153 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:49 compute-0 podman[264471]: 2026-02-24 16:27:49.157389079 +0000 UTC m=+0.112908155 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:27:50 compute-0 nova_compute[188703]: 2026-02-24 16:27:50.563 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:54 compute-0 nova_compute[188703]: 2026-02-24 16:27:54.157 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:55 compute-0 podman[264514]: 2026-02-24 16:27:55.180635609 +0000 UTC m=+0.124491196 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 24 16:27:55 compute-0 podman[264513]: 2026-02-24 16:27:55.191226514 +0000 UTC m=+0.141503164 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public)
Feb 24 16:27:55 compute-0 nova_compute[188703]: 2026-02-24 16:27:55.567 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:27:55.762 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:27:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:27:55.762 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:27:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:27:55.763 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:27:58 compute-0 sshd-session[264550]: Invalid user sol from 45.148.10.240 port 45108
Feb 24 16:27:58 compute-0 sshd-session[264550]: Connection closed by invalid user sol 45.148.10.240 port 45108 [preauth]
Feb 24 16:27:58 compute-0 sshd-session[264552]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 16:27:59 compute-0 nova_compute[188703]: 2026-02-24 16:27:59.160 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:27:59 compute-0 podman[204685]: time="2026-02-24T16:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:27:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:27:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Feb 24 16:28:00 compute-0 nova_compute[188703]: 2026-02-24 16:28:00.572 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:01 compute-0 podman[264554]: 2026-02-24 16:28:01.169524586 +0000 UTC m=+0.116568124 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, container_name=openstack_network_exporter, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:28:01 compute-0 openstack_network_exporter[207830]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:28:01 compute-0 openstack_network_exporter[207830]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:28:04 compute-0 nova_compute[188703]: 2026-02-24 16:28:04.164 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:05 compute-0 podman[264573]: 2026-02-24 16:28:05.177761664 +0000 UTC m=+0.133577311 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 24 16:28:05 compute-0 podman[264592]: 2026-02-24 16:28:05.354420362 +0000 UTC m=+0.133880429 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:28:05 compute-0 nova_compute[188703]: 2026-02-24 16:28:05.576 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:09 compute-0 nova_compute[188703]: 2026-02-24 16:28:09.167 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:10 compute-0 nova_compute[188703]: 2026-02-24 16:28:10.580 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:12 compute-0 podman[264618]: 2026-02-24 16:28:12.143690238 +0000 UTC m=+0.099879665 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:28:14 compute-0 nova_compute[188703]: 2026-02-24 16:28:14.171 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:15 compute-0 nova_compute[188703]: 2026-02-24 16:28:15.584 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:18 compute-0 nova_compute[188703]: 2026-02-24 16:28:18.975 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:19 compute-0 nova_compute[188703]: 2026-02-24 16:28:19.175 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:20 compute-0 podman[264644]: 2026-02-24 16:28:20.133697251 +0000 UTC m=+0.079467757 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:28:20 compute-0 podman[264645]: 2026-02-24 16:28:20.135289984 +0000 UTC m=+0.085426227 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Feb 24 16:28:20 compute-0 nova_compute[188703]: 2026-02-24 16:28:20.587 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:20 compute-0 nova_compute[188703]: 2026-02-24 16:28:20.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:24 compute-0 nova_compute[188703]: 2026-02-24 16:28:24.178 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:25 compute-0 nova_compute[188703]: 2026-02-24 16:28:25.591 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:26 compute-0 podman[264685]: 2026-02-24 16:28:26.127264643 +0000 UTC m=+0.080497705 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:28:26 compute-0 podman[264684]: 2026-02-24 16:28:26.129294007 +0000 UTC m=+0.084334137 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=)
Feb 24 16:28:26 compute-0 nova_compute[188703]: 2026-02-24 16:28:26.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:26 compute-0 nova_compute[188703]: 2026-02-24 16:28:26.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:26 compute-0 nova_compute[188703]: 2026-02-24 16:28:26.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:28:27 compute-0 nova_compute[188703]: 2026-02-24 16:28:27.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:28 compute-0 nova_compute[188703]: 2026-02-24 16:28:28.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:28 compute-0 nova_compute[188703]: 2026-02-24 16:28:28.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:28:28 compute-0 nova_compute[188703]: 2026-02-24 16:28:28.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:28:28 compute-0 nova_compute[188703]: 2026-02-24 16:28:28.957 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:28:29 compute-0 nova_compute[188703]: 2026-02-24 16:28:29.183 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:29 compute-0 podman[204685]: time="2026-02-24T16:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:28:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:28:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3925 "" "Go-http-client/1.1"
Feb 24 16:28:29 compute-0 nova_compute[188703]: 2026-02-24 16:28:29.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:30 compute-0 nova_compute[188703]: 2026-02-24 16:28:30.595 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:31 compute-0 openstack_network_exporter[207830]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:28:31 compute-0 openstack_network_exporter[207830]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:28:31 compute-0 sshd-session[264720]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 16:28:31 compute-0 nova_compute[188703]: 2026-02-24 16:28:31.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:32 compute-0 podman[264722]: 2026-02-24 16:28:32.140803732 +0000 UTC m=+0.093371501 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1770267347, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7)
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.188 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.651 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.978 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.978 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:28:34 compute-0 nova_compute[188703]: 2026-02-24 16:28:34.979 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.431 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.432 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5297MB free_disk=72.15718078613281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.432 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.433 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.588 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.589 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.598 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.620 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.640 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.642 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:28:35 compute-0 nova_compute[188703]: 2026-02-24 16:28:35.643 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:28:36 compute-0 podman[264743]: 2026-02-24 16:28:36.187197205 +0000 UTC m=+0.133529360 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 16:28:36 compute-0 podman[264744]: 2026-02-24 16:28:36.200657026 +0000 UTC m=+0.140217249 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:28:39 compute-0 nova_compute[188703]: 2026-02-24 16:28:39.191 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:40 compute-0 nova_compute[188703]: 2026-02-24 16:28:40.602 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:43 compute-0 podman[264785]: 2026-02-24 16:28:43.149316176 +0000 UTC m=+0.107085379 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:28:44 compute-0 nova_compute[188703]: 2026-02-24 16:28:44.193 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:45 compute-0 nova_compute[188703]: 2026-02-24 16:28:45.607 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:49 compute-0 nova_compute[188703]: 2026-02-24 16:28:49.198 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:50 compute-0 nova_compute[188703]: 2026-02-24 16:28:50.610 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:51 compute-0 podman[264809]: 2026-02-24 16:28:51.13003846 +0000 UTC m=+0.092818675 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:28:51 compute-0 podman[264810]: 2026-02-24 16:28:51.181995686 +0000 UTC m=+0.129672785 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 24 16:28:54 compute-0 nova_compute[188703]: 2026-02-24 16:28:54.201 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:55 compute-0 nova_compute[188703]: 2026-02-24 16:28:55.613 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:28:55.763 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:28:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:28:55.764 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:28:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:28:55.764 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:28:57 compute-0 podman[264850]: 2026-02-24 16:28:57.14256479 +0000 UTC m=+0.089376862 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:28:57 compute-0 podman[264849]: 2026-02-24 16:28:57.182245207 +0000 UTC m=+0.127787616 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, version=9.4, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0)
Feb 24 16:28:59 compute-0 nova_compute[188703]: 2026-02-24 16:28:59.203 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:28:59 compute-0 podman[204685]: time="2026-02-24T16:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:28:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:28:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Feb 24 16:29:00 compute-0 nova_compute[188703]: 2026-02-24 16:29:00.618 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:01 compute-0 openstack_network_exporter[207830]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:29:01 compute-0 openstack_network_exporter[207830]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:29:03 compute-0 podman[264888]: 2026-02-24 16:29:03.180233218 +0000 UTC m=+0.129306317 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, architecture=x86_64, container_name=openstack_network_exporter)
Feb 24 16:29:04 compute-0 nova_compute[188703]: 2026-02-24 16:29:04.206 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:05 compute-0 nova_compute[188703]: 2026-02-24 16:29:05.621 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:07 compute-0 podman[264909]: 2026-02-24 16:29:07.18194283 +0000 UTC m=+0.129186514 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:29:07 compute-0 podman[264910]: 2026-02-24 16:29:07.214741421 +0000 UTC m=+0.158350036 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 24 16:29:09 compute-0 nova_compute[188703]: 2026-02-24 16:29:09.208 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:10 compute-0 nova_compute[188703]: 2026-02-24 16:29:10.626 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:14 compute-0 podman[264952]: 2026-02-24 16:29:14.125643055 +0000 UTC m=+0.083478925 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:29:14 compute-0 nova_compute[188703]: 2026-02-24 16:29:14.210 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:15 compute-0 nova_compute[188703]: 2026-02-24 16:29:15.630 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:19 compute-0 nova_compute[188703]: 2026-02-24 16:29:19.212 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:20 compute-0 nova_compute[188703]: 2026-02-24 16:29:20.635 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:20 compute-0 nova_compute[188703]: 2026-02-24 16:29:20.643 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:22 compute-0 podman[264977]: 2026-02-24 16:29:22.148246885 +0000 UTC m=+0.090997497 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:29:22 compute-0 podman[264978]: 2026-02-24 16:29:22.165876218 +0000 UTC m=+0.106523603 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 24 16:29:22 compute-0 nova_compute[188703]: 2026-02-24 16:29:22.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:24 compute-0 nova_compute[188703]: 2026-02-24 16:29:24.215 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:25 compute-0 nova_compute[188703]: 2026-02-24 16:29:25.642 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:27 compute-0 nova_compute[188703]: 2026-02-24 16:29:27.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:27 compute-0 nova_compute[188703]: 2026-02-24 16:29:27.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:29:28 compute-0 podman[265017]: 2026-02-24 16:29:28.180109116 +0000 UTC m=+0.119533654 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, version=9.4, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 24 16:29:28 compute-0 podman[265018]: 2026-02-24 16:29:28.185275234 +0000 UTC m=+0.123208832 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:29:28 compute-0 nova_compute[188703]: 2026-02-24 16:29:28.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:28 compute-0 nova_compute[188703]: 2026-02-24 16:29:28.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.217 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:29 compute-0 podman[204685]: time="2026-02-24T16:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:29:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:29:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.965 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:29:29 compute-0 nova_compute[188703]: 2026-02-24 16:29:29.966 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:30 compute-0 nova_compute[188703]: 2026-02-24 16:29:30.648 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:31 compute-0 openstack_network_exporter[207830]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:29:31 compute-0 openstack_network_exporter[207830]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:29:31 compute-0 nova_compute[188703]: 2026-02-24 16:29:31.962 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:34 compute-0 podman[265058]: 2026-02-24 16:29:34.161591922 +0000 UTC m=+0.099329931 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.7, distribution-scope=public, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Feb 24 16:29:34 compute-0 nova_compute[188703]: 2026-02-24 16:29:34.220 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:35 compute-0 nova_compute[188703]: 2026-02-24 16:29:35.653 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:36 compute-0 nova_compute[188703]: 2026-02-24 16:29:36.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:36 compute-0 nova_compute[188703]: 2026-02-24 16:29:36.972 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:29:36 compute-0 nova_compute[188703]: 2026-02-24 16:29:36.973 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:29:36 compute-0 nova_compute[188703]: 2026-02-24 16:29:36.974 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:29:36 compute-0 nova_compute[188703]: 2026-02-24 16:29:36.976 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.460 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.462 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5300MB free_disk=72.15718078613281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.463 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.464 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.533 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.534 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.559 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.576 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.579 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:29:37 compute-0 nova_compute[188703]: 2026-02-24 16:29:37.580 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:29:38 compute-0 podman[265077]: 2026-02-24 16:29:38.151581309 +0000 UTC m=+0.100955985 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 24 16:29:38 compute-0 podman[265078]: 2026-02-24 16:29:38.241822164 +0000 UTC m=+0.186533274 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 24 16:29:39 compute-0 nova_compute[188703]: 2026-02-24 16:29:39.223 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:39 compute-0 nova_compute[188703]: 2026-02-24 16:29:39.577 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.847 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.848 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f26028ec8c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:29:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:29:40 compute-0 nova_compute[188703]: 2026-02-24 16:29:40.658 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:44 compute-0 nova_compute[188703]: 2026-02-24 16:29:44.226 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:44 compute-0 podman[265124]: 2026-02-24 16:29:44.781491662 +0000 UTC m=+0.099951368 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:29:45 compute-0 nova_compute[188703]: 2026-02-24 16:29:45.662 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:49 compute-0 nova_compute[188703]: 2026-02-24 16:29:49.228 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:50 compute-0 nova_compute[188703]: 2026-02-24 16:29:50.667 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:53 compute-0 podman[265150]: 2026-02-24 16:29:53.154356086 +0000 UTC m=+0.098558561 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 24 16:29:53 compute-0 podman[265149]: 2026-02-24 16:29:53.169715769 +0000 UTC m=+0.121105447 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:29:54 compute-0 nova_compute[188703]: 2026-02-24 16:29:54.232 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:55 compute-0 nova_compute[188703]: 2026-02-24 16:29:55.672 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:29:55.766 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:29:55.767 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:29:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:29:55.767 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:29:59 compute-0 podman[265193]: 2026-02-24 16:29:59.148795381 +0000 UTC m=+0.103273357 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.openshift.expose-services=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, config_id=kepler)
Feb 24 16:29:59 compute-0 podman[265194]: 2026-02-24 16:29:59.191014405 +0000 UTC m=+0.134772493 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:29:59 compute-0 nova_compute[188703]: 2026-02-24 16:29:59.233 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:29:59 compute-0 podman[204685]: time="2026-02-24T16:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:29:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:29:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Feb 24 16:30:00 compute-0 nova_compute[188703]: 2026-02-24 16:30:00.675 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:01 compute-0 openstack_network_exporter[207830]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:30:01 compute-0 openstack_network_exporter[207830]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:30:04 compute-0 nova_compute[188703]: 2026-02-24 16:30:04.238 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:05 compute-0 podman[265231]: 2026-02-24 16:30:05.193241992 +0000 UTC m=+0.131766972 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, version=9.7, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 24 16:30:05 compute-0 nova_compute[188703]: 2026-02-24 16:30:05.679 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:09 compute-0 podman[265253]: 2026-02-24 16:30:09.158518474 +0000 UTC m=+0.114364575 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825)
Feb 24 16:30:09 compute-0 podman[265254]: 2026-02-24 16:30:09.212282448 +0000 UTC m=+0.165940450 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:30:09 compute-0 nova_compute[188703]: 2026-02-24 16:30:09.239 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:10 compute-0 nova_compute[188703]: 2026-02-24 16:30:10.684 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:14 compute-0 nova_compute[188703]: 2026-02-24 16:30:14.246 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:15 compute-0 podman[265296]: 2026-02-24 16:30:15.144699947 +0000 UTC m=+0.094444158 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:30:15 compute-0 nova_compute[188703]: 2026-02-24 16:30:15.688 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:19 compute-0 nova_compute[188703]: 2026-02-24 16:30:19.248 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:20 compute-0 nova_compute[188703]: 2026-02-24 16:30:20.692 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:20 compute-0 nova_compute[188703]: 2026-02-24 16:30:20.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:22 compute-0 nova_compute[188703]: 2026-02-24 16:30:22.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:24 compute-0 podman[265320]: 2026-02-24 16:30:24.170262811 +0000 UTC m=+0.121995940 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:30:24 compute-0 podman[265321]: 2026-02-24 16:30:24.168824623 +0000 UTC m=+0.115985129 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 24 16:30:24 compute-0 nova_compute[188703]: 2026-02-24 16:30:24.250 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:25 compute-0 nova_compute[188703]: 2026-02-24 16:30:25.696 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:27 compute-0 nova_compute[188703]: 2026-02-24 16:30:27.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:27 compute-0 nova_compute[188703]: 2026-02-24 16:30:27.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.252 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:29 compute-0 podman[204685]: time="2026-02-24T16:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:30:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:30:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.946 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.964 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.965 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:29 compute-0 nova_compute[188703]: 2026-02-24 16:30:29.966 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:30 compute-0 podman[265360]: 2026-02-24 16:30:30.134466772 +0000 UTC m=+0.095340553 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=kepler, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Feb 24 16:30:30 compute-0 podman[265361]: 2026-02-24 16:30:30.204946977 +0000 UTC m=+0.154037200 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 24 16:30:30 compute-0 nova_compute[188703]: 2026-02-24 16:30:30.699 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:30 compute-0 nova_compute[188703]: 2026-02-24 16:30:30.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:31 compute-0 openstack_network_exporter[207830]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:30:31 compute-0 openstack_network_exporter[207830]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:30:32 compute-0 nova_compute[188703]: 2026-02-24 16:30:32.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:34 compute-0 nova_compute[188703]: 2026-02-24 16:30:34.256 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:35 compute-0 nova_compute[188703]: 2026-02-24 16:30:35.702 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:36 compute-0 podman[265399]: 2026-02-24 16:30:36.171662576 +0000 UTC m=+0.124922698 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, distribution-scope=public)
Feb 24 16:30:38 compute-0 nova_compute[188703]: 2026-02-24 16:30:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.008 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.008 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.009 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.009 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.258 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.383 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.385 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5292MB free_disk=72.1573257446289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.385 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.386 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.510 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.511 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.540 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.556 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.558 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:30:39 compute-0 nova_compute[188703]: 2026-02-24 16:30:39.559 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:30:40 compute-0 sshd-session[265420]: Invalid user ubuntu from 45.148.10.240 port 34686
Feb 24 16:30:40 compute-0 podman[265422]: 2026-02-24 16:30:40.146140536 +0000 UTC m=+0.099902366 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 24 16:30:40 compute-0 podman[265423]: 2026-02-24 16:30:40.193430687 +0000 UTC m=+0.149595481 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:30:40 compute-0 sshd-session[265420]: Connection closed by invalid user ubuntu 45.148.10.240 port 34686 [preauth]
Feb 24 16:30:40 compute-0 nova_compute[188703]: 2026-02-24 16:30:40.704 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:44 compute-0 nova_compute[188703]: 2026-02-24 16:30:44.263 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:45 compute-0 nova_compute[188703]: 2026-02-24 16:30:45.708 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
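The recurring ovsdbapp "[POLLIN] on fd 26" lines are the OVSDB IDL's event loop waking because its connection to ovsdb-server became readable; at DEBUG verbosity ovs/poller.py logs every wakeup, which is why they appear every few seconds. The underlying mechanism is plain poll(2); a stdlib-only sketch, with a socketpair standing in for the OVSDB connection:

    import select
    import socket

    server, client = socket.socketpair()      # stand-in for the ovsdb-server connection
    poller = select.poll()
    poller.register(client, select.POLLIN)

    server.send(b"update")                    # peer writes -> fd becomes readable
    for fd, events in poller.poll(1000):      # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")     # the same condition __log_wakeup reports
            client.recv(4096)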
Feb 24 16:30:46 compute-0 podman[265463]: 2026-02-24 16:30:46.131249069 +0000 UTC m=+0.084999534 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:30:49 compute-0 nova_compute[188703]: 2026-02-24 16:30:49.265 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:50 compute-0 nova_compute[188703]: 2026-02-24 16:30:50.712 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:54 compute-0 nova_compute[188703]: 2026-02-24 16:30:54.268 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:55 compute-0 podman[265488]: 2026-02-24 16:30:55.16786437 +0000 UTC m=+0.112359850 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 24 16:30:55 compute-0 podman[265487]: 2026-02-24 16:30:55.170308856 +0000 UTC m=+0.117906430 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:30:55 compute-0 nova_compute[188703]: 2026-02-24 16:30:55.716 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:30:55.767 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:30:55.768 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:30:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:30:55.768 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
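The three lockutils lines are oslo.concurrency's standard trace for one critical section: "Acquiring", "acquired" with the time spent waiting, and "released" with the time the lock was held. Roughly equivalent stdlib logic, as an illustration of what those timings mean (not oslo's implementation):

    import threading
    import time

    _locks = {"_check_child_processes": threading.Lock()}

    def with_timed_lock(name, fn):
        t0 = time.monotonic()
        with _locks[name]:                    # "Acquiring" ... "acquired"
            waited = time.monotonic() - t0
            t1 = time.monotonic()
            try:
                return fn()
            finally:                          # "released"
                held = time.monotonic() - t1
                print(f'Lock "{name}" :: waited {waited:.3f}s :: held {held:.3f}s')

    with_timed_lock("_check_child_processes", lambda: None)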
Feb 24 16:30:59 compute-0 nova_compute[188703]: 2026-02-24 16:30:59.274 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:30:59 compute-0 podman[204685]: time="2026-02-24T16:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:30:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:30:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
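These podman[204685] lines are the podman system service answering REST calls over the socket podman_exporter mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above): one container list, one stats query, both journaled in access-log format. The same call can be made with only the standard library; a sketch assuming that socket path and the v4.9.3 API version seen in the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over an AF_UNIX socket; the nominal host is ignored.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))      # e.g. 200 28004, as in the access log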
Feb 24 16:31:00 compute-0 nova_compute[188703]: 2026-02-24 16:31:00.721 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:01 compute-0 podman[265527]: 2026-02-24 16:31:01.171671147 +0000 UTC m=+0.118692980 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 16:31:01 compute-0 podman[265526]: 2026-02-24 16:31:01.193447233 +0000 UTC m=+0.146201250 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:31:01 compute-0 openstack_network_exporter[207830]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:31:01 compute-0 openstack_network_exporter[207830]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
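Both errors are expected on a host without a userspace (netdev) datapath: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only report on DPDK PMD threads, so ovs-vswitchd answers "please specify an existing datapath" and the exporter skips those metrics. A hedged reproduction via subprocess, assuming ovs-appctl is installed and ovs-vswitchd is running:

    import subprocess

    # On a kernel-datapath host this fails exactly as logged above.
    r = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True,
    )
    if r.returncode != 0:
        print("skipping PMD metrics:", (r.stderr or r.stdout).strip())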
Feb 24 16:31:04 compute-0 nova_compute[188703]: 2026-02-24 16:31:04.278 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:05 compute-0 nova_compute[188703]: 2026-02-24 16:31:05.725 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:07 compute-0 podman[265566]: 2026-02-24 16:31:07.185269166 +0000 UTC m=+0.133349334 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, name=ubi9/ubi-minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:31:09 compute-0 nova_compute[188703]: 2026-02-24 16:31:09.280 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:10 compute-0 nova_compute[188703]: 2026-02-24 16:31:10.730 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:11 compute-0 podman[265584]: 2026-02-24 16:31:11.150006606 +0000 UTC m=+0.108569108 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 24 16:31:11 compute-0 podman[265585]: 2026-02-24 16:31:11.200434081 +0000 UTC m=+0.146389175 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 24 16:31:14 compute-0 nova_compute[188703]: 2026-02-24 16:31:14.287 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:15 compute-0 nova_compute[188703]: 2026-02-24 16:31:15.734 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:17 compute-0 podman[265630]: 2026-02-24 16:31:17.167323257 +0000 UTC m=+0.118587128 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:31:19 compute-0 nova_compute[188703]: 2026-02-24 16:31:19.289 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:20 compute-0 nova_compute[188703]: 2026-02-24 16:31:20.738 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:23 compute-0 nova_compute[188703]: 2026-02-24 16:31:23.560 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:24 compute-0 nova_compute[188703]: 2026-02-24 16:31:24.290 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:24 compute-0 nova_compute[188703]: 2026-02-24 16:31:24.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:25 compute-0 nova_compute[188703]: 2026-02-24 16:31:25.741 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:26 compute-0 podman[265653]: 2026-02-24 16:31:26.197629758 +0000 UTC m=+0.139160101 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:31:26 compute-0 podman[265654]: 2026-02-24 16:31:26.197666309 +0000 UTC m=+0.127159757 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Feb 24 16:31:28 compute-0 nova_compute[188703]: 2026-02-24 16:31:28.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:28 compute-0 nova_compute[188703]: 2026-02-24 16:31:28.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
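Per the message, _reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive; at the default of 0 the task is a no-op every cycle. The guard, as the log line implies (illustrative shape only, not nova's source):

    reclaim_instance_interval = 0    # stand-in for CONF.reclaim_instance_interval

    def reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise find SOFT_DELETED instances older than the interval and delete them...

    reclaim_queued_deletes()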
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.293 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:29 compute-0 podman[204685]: time="2026-02-24T16:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:31:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:31:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3924 "" "Go-http-client/1.1"
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.961 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:31:29 compute-0 nova_compute[188703]: 2026-02-24 16:31:29.961 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:30 compute-0 nova_compute[188703]: 2026-02-24 16:31:30.745 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:31 compute-0 openstack_network_exporter[207830]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:31:31 compute-0 openstack_network_exporter[207830]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:31:31 compute-0 nova_compute[188703]: 2026-02-24 16:31:31.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:31 compute-0 nova_compute[188703]: 2026-02-24 16:31:31.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:32 compute-0 podman[265696]: 2026-02-24 16:31:32.168206124 +0000 UTC m=+0.114991202 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public)
Feb 24 16:31:32 compute-0 podman[265697]: 2026-02-24 16:31:32.172766876 +0000 UTC m=+0.113375758 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:31:34 compute-0 nova_compute[188703]: 2026-02-24 16:31:34.296 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:34 compute-0 nova_compute[188703]: 2026-02-24 16:31:34.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:35 compute-0 nova_compute[188703]: 2026-02-24 16:31:35.748 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:38 compute-0 podman[265735]: 2026-02-24 16:31:38.175381012 +0000 UTC m=+0.123138400 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1770267347, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9)
Feb 24 16:31:38 compute-0 nova_compute[188703]: 2026-02-24 16:31:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:38 compute-0 nova_compute[188703]: 2026-02-24 16:31:38.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:31:38 compute-0 nova_compute[188703]: 2026-02-24 16:31:38.986 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:31:38 compute-0 nova_compute[188703]: 2026-02-24 16:31:38.987 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:31:38 compute-0 nova_compute[188703]: 2026-02-24 16:31:38.987 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.300 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.420 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.422 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5296MB free_disk=72.1573257446289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.422 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.422 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.507 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.507 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.539 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.563 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.564 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.584 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.629 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.651 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.681 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.683 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:31:39 compute-0 nova_compute[188703]: 2026-02-24 16:31:39.684 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
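The inventory reported above is what placement uses for scheduling: per resource class, effective capacity is (total - reserved) * allocation_ratio. Plugging in the figures from this update cycle:

    inventory = {   # values copied from the report above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2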
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.847 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.848 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fea0e0c0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.packets': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'disk.device.write.latency': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:31:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:31:39.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
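Every meter in this cycle follows the same shape visible above: run the local_instances discovery, and when it returns nothing, skip the pollster and mark it finished. With no guests running on compute-0, every pollster is skipped. A condensed sketch of that control flow (function and list names are hypothetical, not ceilometer's real signatures):

    # Discover-then-poll loop implied by the log above: an empty discovery
    # result short-circuits each pollster. Names are illustrative only.
    def discover_local_instances():
        return []                  # no instances on this hypervisor yet

    for name in ['memory.usage', 'cpu', 'disk.device.read.bytes']:
        resources = discover_local_instances()
        if not resources:
            print(f'Skip pollster {name}, no resources found this cycle')
            continue
        # get_samples(resources) would run here for a non-empty discovery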
Feb 24 16:31:40 compute-0 nova_compute[188703]: 2026-02-24 16:31:40.752 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:42 compute-0 podman[265757]: 2026-02-24 16:31:42.127682573 +0000 UTC m=+0.086057356 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0)
Feb 24 16:31:42 compute-0 podman[265758]: 2026-02-24 16:31:42.218221398 +0000 UTC m=+0.166048564 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
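The podman health_status records above pack the whole container config into a single journal line. When scanning them, it can help to pull out only the health fields; a convenience regex sketch over an abbreviated sample line (a reading aid, not a general parser for podman events):

    # Extract name/health fields from a podman "container health_status"
    # journal line like the two above. The sample line is abbreviated.
    import re

    line = ('... container health_status 2b41b528... (image=..., '
            'name=ceilometer_agent_compute, health_status=healthy, '
            'health_failing_streak=0, ...)')

    fields = dict(re.findall(
        r'\b(name|health_status|health_failing_streak)=([^,)]+)', line))
    print(fields)
    # {'name': 'ceilometer_agent_compute', 'health_status': 'healthy',
    #  'health_failing_streak': '0'}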
Feb 24 16:31:42 compute-0 nova_compute[188703]: 2026-02-24 16:31:42.681 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:31:44 compute-0 nova_compute[188703]: 2026-02-24 16:31:44.302 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:45 compute-0 nova_compute[188703]: 2026-02-24 16:31:45.756 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:48 compute-0 podman[265802]: 2026-02-24 16:31:48.135021189 +0000 UTC m=+0.094258767 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:31:49 compute-0 nova_compute[188703]: 2026-02-24 16:31:49.306 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:49 compute-0 nova_compute[188703]: 2026-02-24 16:31:49.688 188707 DEBUG oslo_concurrency.processutils [None req-6827323d-373c-4f88-90ea-8c48b30bae3e bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 24 16:31:49 compute-0 nova_compute[188703]: 2026-02-24 16:31:49.709 188707 DEBUG oslo_concurrency.processutils [None req-6827323d-373c-4f88-90ea-8c48b30bae3e bd338d866e3242aeb685fec99c451955 4407f5b870e145d8917119ad928717e8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
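The processutils pair above is the standard oslo pattern: log the command, fork it, then log its return code and wall-clock time. An equivalent stand-alone version with subprocess (mirroring the logged behaviour, not oslo's implementation):

    # What oslo_concurrency.processutils.execute is doing for the
    # "env LANG=C uptime" call above: run it, time it, report the rc.
    import subprocess
    import time

    cmd = ['env', 'LANG=C', 'uptime']
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    print(f'CMD "{" ".join(cmd)}" returned: {proc.returncode} in {elapsed:.3f}s')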
Feb 24 16:31:50 compute-0 nova_compute[188703]: 2026-02-24 16:31:50.759 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:54 compute-0 nova_compute[188703]: 2026-02-24 16:31:54.308 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:55 compute-0 nova_compute[188703]: 2026-02-24 16:31:55.763 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:31:55.768 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:31:55.769 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:31:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:31:55.769 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:31:56 compute-0 nova_compute[188703]: 2026-02-24 16:31:56.216 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:31:56.215 108026 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '0a:94:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fe:af:21:47:90:a4'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 24 16:31:56 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:31:56.218 108026 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
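Here the agent saw SB_Global.nb_cfg move from 18 to 19 but deliberately deferred its acknowledgement by 9 seconds (the Chassis_Private write lands at 16:32:05 below); staggering the writes keeps a fleet of chassis from hitting the southbound DB at the same instant. The delay pattern, sketched with a timer (the 9-second value is from the log; the rest is illustrative, not neutron's event machinery):

    # "Delay, then ack nb_cfg": schedule the chassis-table write a few
    # seconds out instead of doing it inline.
    import threading

    def update_chassis(nb_cfg):
        print(f'writing neutron:ovn-metadata-sb-cfg = {nb_cfg}')

    threading.Timer(9, update_chassis, args=(19,)).start()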
Feb 24 16:31:57 compute-0 podman[265830]: 2026-02-24 16:31:57.118108747 +0000 UTC m=+0.070379270 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:31:57 compute-0 podman[265829]: 2026-02-24 16:31:57.126665769 +0000 UTC m=+0.076877536 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:31:59 compute-0 nova_compute[188703]: 2026-02-24 16:31:59.313 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:31:59 compute-0 podman[204685]: time="2026-02-24T16:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:31:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:31:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3921 "" "Go-http-client/1.1"
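Those two GETs are podman_exporter scraping podman's libpod REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter's config above). The same endpoint can be queried by hand; a standard-library sketch, where the 'Names' response field is assumed from libpod's list-containers schema:

    # Query podman's libpod API over its unix socket, like the exporter's
    # GET /v4.9.3/libpod/containers/json?all=true call logged above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c.get('Names') for c in containers])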
Feb 24 16:32:00 compute-0 nova_compute[188703]: 2026-02-24 16:32:00.766 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:01 compute-0 openstack_network_exporter[207830]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:32:01 compute-0 openstack_network_exporter[207830]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
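The two appctl errors above come from dpif-netdev commands, which only answer for a userspace (PMD/DPDK) datapath; with OVS on the kernel datapath, "please specify an existing datapath" is expected noise from the exporter rather than a failure. A sketch of probing the same command and tolerating that reply:

    # Call the same appctl command the exporter uses and treat the
    # "existing datapath" reply as "no userspace datapath here" rather
    # than a hard error. Requires a running ovs-vswitchd.
    import subprocess

    res = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                         capture_output=True, text=True)
    if res.returncode != 0 and 'existing datapath' in (res.stdout + res.stderr):
        print('no userspace (PMD) datapath configured; skipping PMD metrics')
    else:
        print(res.stdout)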
Feb 24 16:32:03 compute-0 podman[265872]: 2026-02-24 16:32:03.151292285 +0000 UTC m=+0.104539917 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 24 16:32:03 compute-0 podman[265871]: 2026-02-24 16:32:03.167524406 +0000 UTC m=+0.122890475 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, config_id=kepler, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
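Each `container health_status` record above is podman firing the container's configured healthcheck ('test': '/openstack/healthcheck ...', mounted at /openstack) on its timer and logging the verdict. The same check can be triggered on demand; a sketch using the two container names from the lines above:

    import subprocess

    for name in ("ceilometer_agent_ipmi", "kepler"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        # exit code 0 corresponds to health_status=healthy in the log
        print(name, "healthy" if r.returncode == 0 else r.stdout + r.stderr)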
Feb 24 16:32:04 compute-0 nova_compute[188703]: 2026-02-24 16:32:04.322 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:05 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:32:05.222 108026 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ab329b13-e5ce-43e1-b513-c55bd650f251, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
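The DbSetCommand above is the metadata agent acknowledging southbound config revision 19 on its Chassis_Private record. A rough equivalent through ovsdbapp's public API, assuming `api` is an already-connected ovsdbapp API object for the OVN southbound database (connection setup omitted; exact keyword support varies by ovsdbapp version):

    # `api` is assumed to exist: an ovsdbapp Idl/API handle for the OVN SB DB.
    chassis = 'ab329b13-e5ce-43e1-b513-c55bd650f251'
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set('Chassis_Private', chassis,
                           ('external_ids',
                            {'neutron:ovn-metadata-sb-cfg': '19'})))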
Feb 24 16:32:05 compute-0 nova_compute[188703]: 2026-02-24 16:32:05.771 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:09 compute-0 podman[265906]: 2026-02-24 16:32:09.14161735 +0000 UTC m=+0.107164148 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64)
Feb 24 16:32:09 compute-0 nova_compute[188703]: 2026-02-24 16:32:09.324 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:10 compute-0 nova_compute[188703]: 2026-02-24 16:32:10.776 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:13 compute-0 podman[265928]: 2026-02-24 16:32:13.169678166 +0000 UTC m=+0.115691519 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 16:32:13 compute-0 podman[265929]: 2026-02-24 16:32:13.213771502 +0000 UTC m=+0.152788055 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 24 16:32:14 compute-0 nova_compute[188703]: 2026-02-24 16:32:14.326 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:15 compute-0 nova_compute[188703]: 2026-02-24 16:32:15.780 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:19 compute-0 podman[265970]: 2026-02-24 16:32:19.098908454 +0000 UTC m=+0.059077953 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:32:19 compute-0 nova_compute[188703]: 2026-02-24 16:32:19.333 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:20 compute-0 nova_compute[188703]: 2026-02-24 16:32:20.783 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:23 compute-0 nova_compute[188703]: 2026-02-24 16:32:23.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:24 compute-0 nova_compute[188703]: 2026-02-24 16:32:24.334 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:24 compute-0 nova_compute[188703]: 2026-02-24 16:32:24.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
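The "Running periodic task ComputeManager._*" lines are produced by oslo.service's periodic-task runner iterating the tasks a manager class declares. A minimal standalone sketch of that mechanism (simplified; the real task bodies live in nova.compute.manager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10)
        def _poll_volume_usage(self, context):
            pass  # real body lives in nova.compute.manager

    mgr = Manager(cfg.CONF)
    # Emits the same "Running periodic task ..." DEBUG record as the
    # nova_compute lines above (visible when DEBUG logging is enabled).
    mgr.run_periodic_tasks(context=None)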
Feb 24 16:32:25 compute-0 nova_compute[188703]: 2026-02-24 16:32:25.786 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:28 compute-0 podman[265997]: 2026-02-24 16:32:28.15275362 +0000 UTC m=+0.102418879 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 24 16:32:28 compute-0 podman[265996]: 2026-02-24 16:32:28.165820755 +0000 UTC m=+0.127234002 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
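The --collector.systemd.unit-include value above is a regular expression (the doubled backslash is Python-dict escaping in this config dump) restricting node_exporter's systemd collector to EDPM-relevant units; matching is anchored. A quick check against a few illustrative unit names:

    import re

    pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    units = ['ovsdb-server.service', 'virtqemud.service',
             'sshd.service', 'rsyslog.service']
    print([u for u in units if pattern.fullmatch(u)])
    # -> ['ovsdb-server.service', 'virtqemud.service', 'rsyslog.service']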
Feb 24 16:32:29 compute-0 nova_compute[188703]: 2026-02-24 16:32:29.335 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:29 compute-0 podman[204685]: time="2026-02-24T16:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:32:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:32:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Feb 24 16:32:29 compute-0 nova_compute[188703]: 2026-02-24 16:32:29.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:29 compute-0 nova_compute[188703]: 2026-02-24 16:32:29.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:29 compute-0 nova_compute[188703]: 2026-02-24 16:32:29.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
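The skip message above reflects a simple config guard: soft-deleted instances are only reclaimed when reclaim_instance_interval is positive, and nova's default is 0. A paraphrase with illustrative stand-ins for nova's CONF and LOG:

    import logging

    LOG = logging.getLogger(__name__)
    reclaim_instance_interval = 0  # nova default: soft-delete reclaim disabled

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find SOFT_DELETED instances older than the interval
        # and purge them for real.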
Feb 24 16:32:30 compute-0 nova_compute[188703]: 2026-02-24 16:32:30.789 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:31 compute-0 openstack_network_exporter[207830]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:32:31 compute-0 openstack_network_exporter[207830]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:32:31 compute-0 nova_compute[188703]: 2026-02-24 16:32:31.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:31 compute-0 nova_compute[188703]: 2026-02-24 16:32:31.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:32:31 compute-0 nova_compute[188703]: 2026-02-24 16:32:31.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:32:31 compute-0 nova_compute[188703]: 2026-02-24 16:32:31.968 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:32:31 compute-0 nova_compute[188703]: 2026-02-24 16:32:31.968 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:32 compute-0 nova_compute[188703]: 2026-02-24 16:32:32.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:34 compute-0 podman[266038]: 2026-02-24 16:32:34.14216596 +0000 UTC m=+0.094046122 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=)
Feb 24 16:32:34 compute-0 podman[266039]: 2026-02-24 16:32:34.160651221 +0000 UTC m=+0.102720357 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Feb 24 16:32:34 compute-0 nova_compute[188703]: 2026-02-24 16:32:34.338 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:35 compute-0 nova_compute[188703]: 2026-02-24 16:32:35.793 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:36 compute-0 nova_compute[188703]: 2026-02-24 16:32:36.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:38 compute-0 nova_compute[188703]: 2026-02-24 16:32:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:38 compute-0 nova_compute[188703]: 2026-02-24 16:32:38.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:32:38 compute-0 nova_compute[188703]: 2026-02-24 16:32:38.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:32:38 compute-0 nova_compute[188703]: 2026-02-24 16:32:38.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
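The Acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing. A minimal sketch of the pattern that emits it (function name taken from the log, body elided):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # while this runs, the Acquiring/acquired/released DEBUG
              # lines seen above are logged around the call

    clean_compute_node_cache()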
Feb 24 16:32:38 compute-0 nova_compute[188703]: 2026-02-24 16:32:38.983 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.343 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.525 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.527 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5294MB free_disk=72.15734481811523GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
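The pci_devices field above is embedded JSON; vendor 1af4 is Red Hat (virtio) and 8086 is Intel, as expected for a KVM guest. A sketch of summarizing it, excerpted to two entries for brevity:

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_06_0", "vendor_id": "1af4", "product_id": "1005"},
      {"dev_id": "pci_0000_00_01_0", "vendor_id": "8086", "product_id": "7000"}
    ]""")
    print(Counter(d["vendor_id"] for d in pci_devices))
    # over the full list above: 6 virtio (1af4) and 5 Intel (8086) functions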
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.527 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.528 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.710 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.711 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.801 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.823 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
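Placement treats usable capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above admits at most the figures printed below:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2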
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.824 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:32:39 compute-0 nova_compute[188703]: 2026-02-24 16:32:39.825 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:32:40 compute-0 podman[266073]: 2026-02-24 16:32:40.146267298 +0000 UTC m=+0.094781772 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, vcs-type=git, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1770267347, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container)
Feb 24 16:32:40 compute-0 nova_compute[188703]: 2026-02-24 16:32:40.796 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:41 compute-0 nova_compute[188703]: 2026-02-24 16:32:41.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:41 compute-0 nova_compute[188703]: 2026-02-24 16:32:41.942 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 24 16:32:41 compute-0 nova_compute[188703]: 2026-02-24 16:32:41.957 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 24 16:32:43 compute-0 nova_compute[188703]: 2026-02-24 16:32:43.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:44 compute-0 podman[266094]: 2026-02-24 16:32:44.1517338 +0000 UTC m=+0.108018281 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 24 16:32:44 compute-0 podman[266095]: 2026-02-24 16:32:44.199572158 +0000 UTC m=+0.151534461 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:32:44 compute-0 nova_compute[188703]: 2026-02-24 16:32:44.346 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:45 compute-0 nova_compute[188703]: 2026-02-24 16:32:45.801 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:49 compute-0 nova_compute[188703]: 2026-02-24 16:32:49.347 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:50 compute-0 podman[266141]: 2026-02-24 16:32:50.175895456 +0000 UTC m=+0.126330209 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:32:50 compute-0 nova_compute[188703]: 2026-02-24 16:32:50.804 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:50 compute-0 nova_compute[188703]: 2026-02-24 16:32:50.955 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:32:50 compute-0 nova_compute[188703]: 2026-02-24 16:32:50.956 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 24 16:32:54 compute-0 nova_compute[188703]: 2026-02-24 16:32:54.351 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:32:55.769 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:32:55.770 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:32:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:32:55.770 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:32:55 compute-0 nova_compute[188703]: 2026-02-24 16:32:55.811 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:59 compute-0 podman[266165]: 2026-02-24 16:32:59.148833438 +0000 UTC m=+0.101547265 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 24 16:32:59 compute-0 podman[266164]: 2026-02-24 16:32:59.176471268 +0000 UTC m=+0.130169922 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:32:59 compute-0 nova_compute[188703]: 2026-02-24 16:32:59.354 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:32:59 compute-0 podman[204685]: time="2026-02-24T16:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:32:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:32:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3921 "" "Go-http-client/1.1"
Feb 24 16:33:00 compute-0 nova_compute[188703]: 2026-02-24 16:33:00.815 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:01 compute-0 openstack_network_exporter[207830]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:33:01 compute-0 openstack_network_exporter[207830]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:33:04 compute-0 nova_compute[188703]: 2026-02-24 16:33:04.357 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:05 compute-0 podman[266205]: 2026-02-24 16:33:05.180713009 +0000 UTC m=+0.132371691 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 24 16:33:05 compute-0 podman[266206]: 2026-02-24 16:33:05.19032263 +0000 UTC m=+0.138328734 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 24 16:33:05 compute-0 nova_compute[188703]: 2026-02-24 16:33:05.819 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:09 compute-0 nova_compute[188703]: 2026-02-24 16:33:09.362 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:10 compute-0 sshd-session[266240]: Connection closed by authenticating user root 64.236.161.24 port 46112 [preauth]
Feb 24 16:33:10 compute-0 nova_compute[188703]: 2026-02-24 16:33:10.823 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:11 compute-0 podman[266242]: 2026-02-24 16:33:11.135494209 +0000 UTC m=+0.083723152 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1770267347, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z)
Feb 24 16:33:14 compute-0 nova_compute[188703]: 2026-02-24 16:33:14.367 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:14 compute-0 podman[266262]: 2026-02-24 16:33:14.759838133 +0000 UTC m=+0.092400187 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:33:14 compute-0 podman[266263]: 2026-02-24 16:33:14.811648939 +0000 UTC m=+0.136609747 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:33:15 compute-0 nova_compute[188703]: 2026-02-24 16:33:15.827 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:18 compute-0 sshd-session[266306]: Invalid user ubuntu from 45.148.10.240 port 55008
Feb 24 16:33:18 compute-0 sshd-session[266306]: Connection closed by invalid user ubuntu 45.148.10.240 port 55008 [preauth]
Feb 24 16:33:19 compute-0 nova_compute[188703]: 2026-02-24 16:33:19.370 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:20 compute-0 nova_compute[188703]: 2026-02-24 16:33:20.832 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:21 compute-0 podman[266309]: 2026-02-24 16:33:21.146597442 +0000 UTC m=+0.098752249 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:33:24 compute-0 nova_compute[188703]: 2026-02-24 16:33:24.374 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:24 compute-0 nova_compute[188703]: 2026-02-24 16:33:24.963 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:25 compute-0 nova_compute[188703]: 2026-02-24 16:33:25.836 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:25 compute-0 nova_compute[188703]: 2026-02-24 16:33:25.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:29 compute-0 nova_compute[188703]: 2026-02-24 16:33:29.377 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:29 compute-0 podman[204685]: time="2026-02-24T16:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:33:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:33:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Feb 24 16:33:29 compute-0 nova_compute[188703]: 2026-02-24 16:33:29.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:30 compute-0 podman[266334]: 2026-02-24 16:33:30.127359096 +0000 UTC m=+0.079010294 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 24 16:33:30 compute-0 podman[266333]: 2026-02-24 16:33:30.143045413 +0000 UTC m=+0.092763059 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:33:30 compute-0 nova_compute[188703]: 2026-02-24 16:33:30.840 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:30 compute-0 nova_compute[188703]: 2026-02-24 16:33:30.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:30 compute-0 nova_compute[188703]: 2026-02-24 16:33:30.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:33:31 compute-0 openstack_network_exporter[207830]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:33:31 compute-0 openstack_network_exporter[207830]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:33:31 compute-0 nova_compute[188703]: 2026-02-24 16:33:31.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:31 compute-0 nova_compute[188703]: 2026-02-24 16:33:31.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:33:31 compute-0 nova_compute[188703]: 2026-02-24 16:33:31.946 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:33:31 compute-0 nova_compute[188703]: 2026-02-24 16:33:31.969 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:33:32 compute-0 nova_compute[188703]: 2026-02-24 16:33:32.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:32 compute-0 nova_compute[188703]: 2026-02-24 16:33:32.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:34 compute-0 sshd-session[266377]: Connection closed by authenticating user root 52.159.244.83 port 2072 [preauth]
Feb 24 16:33:34 compute-0 nova_compute[188703]: 2026-02-24 16:33:34.378 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:35 compute-0 nova_compute[188703]: 2026-02-24 16:33:35.844 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:36 compute-0 podman[266380]: 2026-02-24 16:33:36.146794571 +0000 UTC m=+0.092534992 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Feb 24 16:33:36 compute-0 podman[266379]: 2026-02-24 16:33:36.162907158 +0000 UTC m=+0.112033890 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 24 16:33:36 compute-0 nova_compute[188703]: 2026-02-24 16:33:36.939 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:38 compute-0 nova_compute[188703]: 2026-02-24 16:33:38.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.004 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.004 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.005 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.005 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.338 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.340 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5288MB free_disk=72.1563491821289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.341 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.342 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.381 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.446 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.447 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.482 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.500 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.502 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:33:39 compute-0 nova_compute[188703]: 2026-02-24 16:33:39.503 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.848 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.849 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f25fece67e0>] with cache [{}], pollster history [{'memory.usage': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:33:39.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:33:40 compute-0 nova_compute[188703]: 2026-02-24 16:33:40.849 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:42 compute-0 podman[266418]: 2026-02-24 16:33:42.185966982 +0000 UTC m=+0.135930088 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git)
Feb 24 16:33:42 compute-0 nova_compute[188703]: 2026-02-24 16:33:42.499 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:33:44 compute-0 nova_compute[188703]: 2026-02-24 16:33:44.385 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:45 compute-0 podman[266439]: 2026-02-24 16:33:45.15427795 +0000 UTC m=+0.106416807 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 16:33:45 compute-0 podman[266440]: 2026-02-24 16:33:45.19334341 +0000 UTC m=+0.144858871 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:33:45 compute-0 nova_compute[188703]: 2026-02-24 16:33:45.852 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:49 compute-0 nova_compute[188703]: 2026-02-24 16:33:49.388 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:50 compute-0 nova_compute[188703]: 2026-02-24 16:33:50.857 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:52 compute-0 podman[266486]: 2026-02-24 16:33:52.163388042 +0000 UTC m=+0.117462547 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 24 16:33:54 compute-0 nova_compute[188703]: 2026-02-24 16:33:54.476 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:33:55.771 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:33:55.772 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:33:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:33:55.772 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:33:55 compute-0 nova_compute[188703]: 2026-02-24 16:33:55.862 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:59 compute-0 nova_compute[188703]: 2026-02-24 16:33:59.477 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:33:59 compute-0 podman[204685]: time="2026-02-24T16:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:33:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:33:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3921 "" "Go-http-client/1.1"
Feb 24 16:34:00 compute-0 nova_compute[188703]: 2026-02-24 16:34:00.866 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:01 compute-0 podman[266511]: 2026-02-24 16:34:01.175716512 +0000 UTC m=+0.113752148 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:34:01 compute-0 podman[266510]: 2026-02-24 16:34:01.178894518 +0000 UTC m=+0.125125736 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:34:01 compute-0 openstack_network_exporter[207830]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:34:01 compute-0 openstack_network_exporter[207830]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:34:04 compute-0 nova_compute[188703]: 2026-02-24 16:34:04.482 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:04 compute-0 sshd-session[266549]: Invalid user pi from 185.156.73.233 port 49730
Feb 24 16:34:04 compute-0 sshd-session[266549]: Connection closed by invalid user pi 185.156.73.233 port 49730 [preauth]
Feb 24 16:34:05 compute-0 nova_compute[188703]: 2026-02-24 16:34:05.870 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:07 compute-0 podman[266551]: 2026-02-24 16:34:07.178366752 +0000 UTC m=+0.134381776 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible)
Feb 24 16:34:07 compute-0 podman[266552]: 2026-02-24 16:34:07.17943582 +0000 UTC m=+0.130278824 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 24 16:34:09 compute-0 nova_compute[188703]: 2026-02-24 16:34:09.487 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:10 compute-0 nova_compute[188703]: 2026-02-24 16:34:10.874 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:13 compute-0 podman[266592]: 2026-02-24 16:34:13.155329035 +0000 UTC m=+0.108106054 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, version=9.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, release=1770267347)
Feb 24 16:34:14 compute-0 nova_compute[188703]: 2026-02-24 16:34:14.490 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:15 compute-0 nova_compute[188703]: 2026-02-24 16:34:15.879 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:16 compute-0 podman[266612]: 2026-02-24 16:34:16.194591679 +0000 UTC m=+0.141238162 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Feb 24 16:34:16 compute-0 podman[266613]: 2026-02-24 16:34:16.20566398 +0000 UTC m=+0.145061256 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:34:19 compute-0 nova_compute[188703]: 2026-02-24 16:34:19.492 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:20 compute-0 nova_compute[188703]: 2026-02-24 16:34:20.883 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:23 compute-0 podman[266657]: 2026-02-24 16:34:23.097939663 +0000 UTC m=+0.060837923 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:34:24 compute-0 nova_compute[188703]: 2026-02-24 16:34:24.496 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:25 compute-0 nova_compute[188703]: 2026-02-24 16:34:25.889 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:26 compute-0 nova_compute[188703]: 2026-02-24 16:34:26.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:26 compute-0 nova_compute[188703]: 2026-02-24 16:34:26.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:29 compute-0 nova_compute[188703]: 2026-02-24 16:34:29.497 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:29 compute-0 podman[204685]: time="2026-02-24T16:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:34:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:34:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Feb 24 16:34:30 compute-0 nova_compute[188703]: 2026-02-24 16:34:30.892 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:31 compute-0 openstack_network_exporter[207830]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:34:31 compute-0 openstack_network_exporter[207830]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:34:31 compute-0 nova_compute[188703]: 2026-02-24 16:34:31.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:31 compute-0 nova_compute[188703]: 2026-02-24 16:34:31.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:31 compute-0 nova_compute[188703]: 2026-02-24 16:34:31.944 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 24 16:34:32 compute-0 podman[266682]: 2026-02-24 16:34:32.150122045 +0000 UTC m=+0.098839172 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 24 16:34:32 compute-0 podman[266681]: 2026-02-24 16:34:32.154029372 +0000 UTC m=+0.108123965 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 24 16:34:32 compute-0 nova_compute[188703]: 2026-02-24 16:34:32.946 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:32 compute-0 nova_compute[188703]: 2026-02-24 16:34:32.947 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:34:32 compute-0 nova_compute[188703]: 2026-02-24 16:34:32.947 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:34:34 compute-0 nova_compute[188703]: 2026-02-24 16:34:34.500 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:35 compute-0 nova_compute[188703]: 2026-02-24 16:34:35.898 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:36 compute-0 nova_compute[188703]: 2026-02-24 16:34:36.206 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:34:36 compute-0 nova_compute[188703]: 2026-02-24 16:34:36.207 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:36 compute-0 nova_compute[188703]: 2026-02-24 16:34:36.209 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:38 compute-0 podman[266725]: 2026-02-24 16:34:38.143848164 +0000 UTC m=+0.092395808 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Feb 24 16:34:38 compute-0 podman[266724]: 2026-02-24 16:34:38.159466807 +0000 UTC m=+0.106063148 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.503 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.983 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:34:39 compute-0 nova_compute[188703]: 2026-02-24 16:34:39.984 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:34:40 compute-0 nova_compute[188703]: 2026-02-24 16:34:40.488 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:34:40 compute-0 nova_compute[188703]: 2026-02-24 16:34:40.490 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5300MB free_disk=72.1563491821289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:34:40 compute-0 nova_compute[188703]: 2026-02-24 16:34:40.490 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:34:40 compute-0 nova_compute[188703]: 2026-02-24 16:34:40.490 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:34:40 compute-0 nova_compute[188703]: 2026-02-24 16:34:40.901 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.433 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.434 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.864 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.883 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.886 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:34:41 compute-0 nova_compute[188703]: 2026-02-24 16:34:41.887 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:34:44 compute-0 podman[266765]: 2026-02-24 16:34:44.15375278 +0000 UTC m=+0.109685836 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=)
Feb 24 16:34:44 compute-0 nova_compute[188703]: 2026-02-24 16:34:44.507 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:45 compute-0 nova_compute[188703]: 2026-02-24 16:34:45.905 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:47 compute-0 podman[266786]: 2026-02-24 16:34:47.135649037 +0000 UTC m=+0.093550379 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 24 16:34:47 compute-0 podman[266787]: 2026-02-24 16:34:47.196683812 +0000 UTC m=+0.148268063 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 24 16:34:49 compute-0 nova_compute[188703]: 2026-02-24 16:34:49.508 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:50 compute-0 nova_compute[188703]: 2026-02-24 16:34:50.909 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:54 compute-0 podman[266828]: 2026-02-24 16:34:54.136233237 +0000 UTC m=+0.088970434 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:34:54 compute-0 nova_compute[188703]: 2026-02-24 16:34:54.788 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:34:55.773 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:34:55.775 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:34:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:34:55.775 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:34:55 compute-0 nova_compute[188703]: 2026-02-24 16:34:55.912 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:34:59 compute-0 podman[204685]: time="2026-02-24T16:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:34:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:34:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3928 "" "Go-http-client/1.1"
Feb 24 16:34:59 compute-0 nova_compute[188703]: 2026-02-24 16:34:59.791 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:00 compute-0 nova_compute[188703]: 2026-02-24 16:35:00.916 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:01 compute-0 openstack_network_exporter[207830]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:35:01 compute-0 openstack_network_exporter[207830]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:35:03 compute-0 podman[266853]: 2026-02-24 16:35:03.147352653 +0000 UTC m=+0.098153513 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 24 16:35:03 compute-0 podman[266854]: 2026-02-24 16:35:03.159293828 +0000 UTC m=+0.104982219 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 24 16:35:04 compute-0 nova_compute[188703]: 2026-02-24 16:35:04.795 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:05 compute-0 nova_compute[188703]: 2026-02-24 16:35:05.920 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:09 compute-0 podman[266896]: 2026-02-24 16:35:09.151930395 +0000 UTC m=+0.102950814 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Feb 24 16:35:09 compute-0 podman[266895]: 2026-02-24 16:35:09.190692977 +0000 UTC m=+0.143355461 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 24 16:35:09 compute-0 nova_compute[188703]: 2026-02-24 16:35:09.798 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:10 compute-0 nova_compute[188703]: 2026-02-24 16:35:10.924 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:14 compute-0 podman[266937]: 2026-02-24 16:35:14.7795721 +0000 UTC m=+0.107540338 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 24 16:35:14 compute-0 nova_compute[188703]: 2026-02-24 16:35:14.800 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:15 compute-0 sshd-session[266958]: Connection closed by authenticating user root 172.214.45.193 port 24584 [preauth]
Feb 24 16:35:15 compute-0 nova_compute[188703]: 2026-02-24 16:35:15.928 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:18 compute-0 podman[266960]: 2026-02-24 16:35:18.156896065 +0000 UTC m=+0.107698272 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:35:18 compute-0 podman[266961]: 2026-02-24 16:35:18.21454511 +0000 UTC m=+0.161416350 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 24 16:35:19 compute-0 nova_compute[188703]: 2026-02-24 16:35:19.804 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:20 compute-0 nova_compute[188703]: 2026-02-24 16:35:20.931 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:24 compute-0 nova_compute[188703]: 2026-02-24 16:35:24.808 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:25 compute-0 podman[267005]: 2026-02-24 16:35:25.164967079 +0000 UTC m=+0.118075824 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:35:25 compute-0 nova_compute[188703]: 2026-02-24 16:35:25.935 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:29 compute-0 podman[204685]: time="2026-02-24T16:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:35:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:35:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Feb 24 16:35:29 compute-0 nova_compute[188703]: 2026-02-24 16:35:29.811 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:30 compute-0 nova_compute[188703]: 2026-02-24 16:35:30.886 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:30 compute-0 nova_compute[188703]: 2026-02-24 16:35:30.888 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:30 compute-0 nova_compute[188703]: 2026-02-24 16:35:30.940 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:31 compute-0 openstack_network_exporter[207830]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:35:31 compute-0 openstack_network_exporter[207830]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:35:31 compute-0 nova_compute[188703]: 2026-02-24 16:35:31.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:31 compute-0 nova_compute[188703]: 2026-02-24 16:35:31.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
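_reclaim_queued_deletes runs on every periodic cycle but bails out immediately: deferred delete is only active when reclaim_instance_interval is a positive number of seconds. The guard amounts to the following sketch (a paraphrase of the nova code path named in the log line, with a plain dict standing in for CONF):

    def reclaim_queued_deletes(conf):
        # A non-positive interval disables deferred-delete reclaim entirely,
        # which is what produces the "skipping..." debug line above.
        if conf.get("reclaim_instance_interval", 0) <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Otherwise soft-deleted instances older than the interval would be
        # looked up and purged here (elided).

    reclaim_queued_deletes({"reclaim_instance_interval": 0})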
Feb 24 16:35:32 compute-0 nova_compute[188703]: 2026-02-24 16:35:32.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:33 compute-0 nova_compute[188703]: 2026-02-24 16:35:33.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:33 compute-0 nova_compute[188703]: 2026-02-24 16:35:33.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:35:33 compute-0 nova_compute[188703]: 2026-02-24 16:35:33.945 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:35:33 compute-0 nova_compute[188703]: 2026-02-24 16:35:33.967 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:35:34 compute-0 podman[267028]: 2026-02-24 16:35:34.151499839 +0000 UTC m=+0.106352806 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 24 16:35:34 compute-0 podman[267029]: 2026-02-24 16:35:34.195642966 +0000 UTC m=+0.145659032 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
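The container health_status events above (podman_exporter, node_exporter, ovn_metadata_agent) come from podman's healthcheck timers; each config_data block shows the check command bind-mounted at /openstack/healthcheck. The same event stream can be followed live, sketched here on the assumption that this podman version emits health_status events and supports JSON-formatted event output:

    import subprocess

    # Equivalent to: podman events --filter event=health_status --format json
    # Streams one JSON object per health check until interrupted.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        print(line.rstrip())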
Feb 24 16:35:34 compute-0 nova_compute[188703]: 2026-02-24 16:35:34.815 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:35 compute-0 nova_compute[188703]: 2026-02-24 16:35:35.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:35 compute-0 nova_compute[188703]: 2026-02-24 16:35:35.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:35 compute-0 nova_compute[188703]: 2026-02-24 16:35:35.944 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:36 compute-0 nova_compute[188703]: 2026-02-24 16:35:36.938 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:39 compute-0 nova_compute[188703]: 2026-02-24 16:35:39.817 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.849 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.849 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
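The warning above is about queueing, not failure: the [pollsters] source defines more pollsters than the single worker thread configured to run them, so they execute serially and the cycle simply takes longer. The effect is easy to reproduce with the same executor type the agent logs (ThreadPoolExecutor; the meter names below are just placeholders):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)  # stand-in for one pollster's work
        return meter

    meters = ["cpu", "memory.usage", "power.state"]

    # max_workers=1 mirrors "Processing pollsters ... with [1] threads":
    # the three tasks queue up and the cycle takes ~0.3 s, not ~0.1 s.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:
        results = list(executor.map(poll, meters))
    print(results, round(time.monotonic() - start, 1))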
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef38f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f25ffef38c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef2900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef08f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2601b89580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef0650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f25ffef3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2601bc7980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f25ffef2d20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f25ffef08c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f260000b980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f25ffef3920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f25ffef0b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f25ffef0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f25ffef3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f25ffef23f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f25ffef3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f25ffef04a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f25ffef34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f25ffef2a50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f25ffef3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f25ffef3500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f25ffef35c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f25ffef3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f25ffef3bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f25ffef35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f25ffef3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f25ffef3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f25ffef3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f25ffef36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f25ffef3ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f25ffef3cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f25ffef3fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f25ffea41a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 24 16:35:39 compute-0 ceilometer_agent_compute[198475]: 2026-02-24 16:35:39.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
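Every pollster in this cycle ends in a skip for the same reason: the local_instances discovery returns an empty list because no VMs run on this node yet, matching nova's "Didn't find any instances" line at 16:35:33. The Executing discovery / Skip pollster pairs above reduce to roughly this control flow (a sketch with hypothetical names, not ceilometer's actual code):

    def get_samples(name, resource):
        # Placeholder for a real pollster, which would read libvirt
        # stats for the instance here.
        return [(name, resource)]

    def run_pollster(name, discover):
        resources = discover()  # e.g. the local_instances discovery above
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [s for r in resources for s in get_samples(name, r)]

    run_pollster("memory.usage", lambda: [])  # zero instances -> skip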
Feb 24 16:35:40 compute-0 podman[267071]: 2026-02-24 16:35:40.137931628 +0000 UTC m=+0.094737300 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler)
Feb 24 16:35:40 compute-0 podman[267072]: 2026-02-24 16:35:40.17672459 +0000 UTC m=+0.117829857 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.948 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.977 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.979 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.979 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
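The acquire/wait/hold bookkeeping above is printed by oslo.concurrency's lockutils wrapper (the inner function cited in each line) around resource-tracker methods that share one named lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Held for ~1 ms in the log above; everything that touches the
        # resource tracker serializes on this one named lock.
        pass

    clean_compute_node_cache()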
Feb 24 16:35:40 compute-0 nova_compute[188703]: 2026-02-24 16:35:40.980 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.288 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.289 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5303MB free_disk=72.1563491821289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
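The Hypervisor/Node resource view line embeds the host's PCI inventory as JSON; vendor 1af4 is virtio and 8086 the emulated Intel chipset, which fits the KVM guest hardware from the boot log. A small sanity check over the vendor_id values copied out of that line:

    from collections import Counter

    # vendor_id values in the order they appear in the pci_devices JSON above
    vendors = ["1af4", "8086", "8086", "1af4", "8086", "1af4",
               "1af4", "8086", "1af4", "8086", "1af4"]
    print(Counter(vendors))  # Counter({'1af4': 6, '8086': 5})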
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.290 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.290 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.349 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.350 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.790 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.804 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
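The inventory payload above fixes the capacity this node reports to placement. As a worked example (a minimal sketch, assuming placement's usual capacity formula of (total - reserved) * allocation_ratio; the figures are taken directly from the log line above):

    # Effective schedulable capacity implied by the reported inventory.
    # Assumption: placement computes capacity as (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # Prints: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2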
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.806 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:35:41 compute-0 nova_compute[188703]: 2026-02-24 16:35:41.806 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
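The "Acquiring lock" / "acquired ... waited" / "released ... held" DEBUG triplets in this audit cycle are emitted by oslo.concurrency's lock decorator (the inner wrapper in oslo_concurrency/lockutils.py named at the end of each line). A minimal sketch of the same pattern; lockutils.synchronized is the real API, the function body here is hypothetical:

    from oslo_concurrency import lockutils

    # Serializes callers on the named lock and logs the
    # acquired/waited/held DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # hypothetical body: audit hypervisor resources while holding
        # the same "compute_resources" lock the resource tracker uses
        pass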
Feb 24 16:35:44 compute-0 nova_compute[188703]: 2026-02-24 16:35:44.819 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:45 compute-0 podman[267111]: 2026-02-24 16:35:45.151765914 +0000 UTC m=+0.111149846 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=openstack_network_exporter, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal)
Feb 24 16:35:45 compute-0 nova_compute[188703]: 2026-02-24 16:35:45.799 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:35:45 compute-0 nova_compute[188703]: 2026-02-24 16:35:45.952 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:49 compute-0 podman[267132]: 2026-02-24 16:35:49.149888789 +0000 UTC m=+0.104101754 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:35:49 compute-0 podman[267133]: 2026-02-24 16:35:49.181169517 +0000 UTC m=+0.130836869 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:35:49 compute-0 nova_compute[188703]: 2026-02-24 16:35:49.822 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:50 compute-0 nova_compute[188703]: 2026-02-24 16:35:50.956 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:54 compute-0 nova_compute[188703]: 2026-02-24 16:35:54.825 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:54 compute-0 sshd-session[267176]: Invalid user sol from 45.148.10.240 port 39500
Feb 24 16:35:55 compute-0 sshd-session[267176]: Connection closed by invalid user sol 45.148.10.240 port 39500 [preauth]
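The two sshd-session lines above are routine SSH brute-force noise: an invalid username ("sol") probed from 45.148.10.240 and dropped before authentication. A throwaway sketch for tallying such probes per source address from a saved excerpt of this journal (the file name is hypothetical):

    import re
    from collections import Counter

    # Count "Invalid user NAME from ADDR port N" probes per source address.
    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("compute-0.log") as fh:  # hypothetical saved journal excerpt
        for line in fh:
            m = pattern.search(line)
            if m:
                hits[m.group(2)] += 1
    for addr, count in hits.most_common():
        print(addr, count)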
Feb 24 16:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:35:55.774 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:35:55.775 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:35:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:35:55.775 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:35:55 compute-0 nova_compute[188703]: 2026-02-24 16:35:55.960 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:35:56 compute-0 podman[267178]: 2026-02-24 16:35:56.142159244 +0000 UTC m=+0.095591984 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 24 16:35:59 compute-0 podman[204685]: time="2026-02-24T16:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:35:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:35:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
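These GET lines are the podman_exporter polling the libpod REST API over the podman socket that its config_data mounts (/run/podman/podman.sock). A stdlib-only sketch of the same containers/json call, assuming the socket path and the v4.9.3 API prefix seen in the log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # host is ignored for AF_UNIX
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Names"], c["State"])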
Feb 24 16:35:59 compute-0 nova_compute[188703]: 2026-02-24 16:35:59.828 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:00 compute-0 nova_compute[188703]: 2026-02-24 16:36:00.964 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:01 compute-0 openstack_network_exporter[207830]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:36:01 compute-0 openstack_network_exporter[207830]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
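These recurring exporter errors mean the ovs-appctl PMD commands it issues (dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show) have no userspace (netdev/DPDK) datapath to query on this host; the pair repeats on every collection interval (see the 16:36:31 occurrences below). A quick check of which datapaths actually exist, assuming ovs-appctl is on PATH and ovs-vswitchd is running:

    import subprocess

    # List the datapaths ovs-vswitchd knows about. A kernel-only host
    # typically shows "system@ovs-system" and no "netdev@..." entry,
    # which is exactly the case the pmd-*-show calls fail on.
    out = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)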
Feb 24 16:36:04 compute-0 nova_compute[188703]: 2026-02-24 16:36:04.831 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:05 compute-0 podman[267203]: 2026-02-24 16:36:05.162765889 +0000 UTC m=+0.112877493 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 24 16:36:05 compute-0 podman[267202]: 2026-02-24 16:36:05.177576021 +0000 UTC m=+0.130514092 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:36:05 compute-0 nova_compute[188703]: 2026-02-24 16:36:05.968 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:09 compute-0 nova_compute[188703]: 2026-02-24 16:36:09.835 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:10 compute-0 nova_compute[188703]: 2026-02-24 16:36:10.973 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:11 compute-0 podman[267243]: 2026-02-24 16:36:11.13712855 +0000 UTC m=+0.083876315 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 24 16:36:11 compute-0 podman[267242]: 2026-02-24 16:36:11.14744449 +0000 UTC m=+0.108399902 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 24 16:36:14 compute-0 nova_compute[188703]: 2026-02-24 16:36:14.839 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:15 compute-0 nova_compute[188703]: 2026-02-24 16:36:15.978 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:16 compute-0 podman[267284]: 2026-02-24 16:36:16.135062914 +0000 UTC m=+0.089455338 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9/ubi-minimal, version=9.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 24 16:36:16 compute-0 sshd-session[267282]: Connection closed by authenticating user root 52.176.35.114 port 7168 [preauth]
Feb 24 16:36:19 compute-0 nova_compute[188703]: 2026-02-24 16:36:19.842 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:20 compute-0 podman[267305]: 2026-02-24 16:36:20.155876383 +0000 UTC m=+0.102312287 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 24 16:36:20 compute-0 podman[267306]: 2026-02-24 16:36:20.209742693 +0000 UTC m=+0.156834625 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 24 16:36:20 compute-0 nova_compute[188703]: 2026-02-24 16:36:20.983 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:24 compute-0 nova_compute[188703]: 2026-02-24 16:36:24.846 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:25 compute-0 nova_compute[188703]: 2026-02-24 16:36:25.988 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:27 compute-0 podman[267350]: 2026-02-24 16:36:27.096638609 +0000 UTC m=+0.059066733 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 24 16:36:29 compute-0 podman[204685]: time="2026-02-24T16:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:36:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:36:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3924 "" "Go-http-client/1.1"
Feb 24 16:36:29 compute-0 nova_compute[188703]: 2026-02-24 16:36:29.848 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:29 compute-0 nova_compute[188703]: 2026-02-24 16:36:29.941 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:29 compute-0 nova_compute[188703]: 2026-02-24 16:36:29.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:30 compute-0 nova_compute[188703]: 2026-02-24 16:36:30.992 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:31 compute-0 openstack_network_exporter[207830]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:36:31 compute-0 openstack_network_exporter[207830]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:36:32 compute-0 nova_compute[188703]: 2026-02-24 16:36:32.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:33 compute-0 nova_compute[188703]: 2026-02-24 16:36:33.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:33 compute-0 nova_compute[188703]: 2026-02-24 16:36:33.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
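The "Running periodic task ComputeManager...." lines come from oslo.service's periodic task machinery, and the "skipping" message above shows the usual pattern of a task gating itself on its configured interval. A minimal sketch; periodic_task.periodic_task and PeriodicTasks are the real oslo.service API, the manager class and body are hypothetical:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            # Mirrors the gating logged above: when the configured
            # reclaim_instance_interval is <= 0, the task bails out
            # immediately ("CONF.reclaim_instance_interval <= 0, skipping...").
            interval = 0  # stand-in for CONF.reclaim_instance_interval
            if interval <= 0:
                return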
Feb 24 16:36:34 compute-0 nova_compute[188703]: 2026-02-24 16:36:34.853 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.943 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.972 188707 DEBUG nova.compute.manager [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.972 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:35 compute-0 nova_compute[188703]: 2026-02-24 16:36:35.997 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:36 compute-0 podman[267373]: 2026-02-24 16:36:36.113646476 +0000 UTC m=+0.073828434 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 24 16:36:36 compute-0 podman[267374]: 2026-02-24 16:36:36.132807106 +0000 UTC m=+0.084656838 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 24 16:36:37 compute-0 nova_compute[188703]: 2026-02-24 16:36:37.943 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:37 compute-0 nova_compute[188703]: 2026-02-24 16:36:37.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:37 compute-0 nova_compute[188703]: 2026-02-24 16:36:37.944 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:39 compute-0 nova_compute[188703]: 2026-02-24 16:36:39.856 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.001 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.981 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.982 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:36:41 compute-0 nova_compute[188703]: 2026-02-24 16:36:41.983 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 24 16:36:42 compute-0 podman[267417]: 2026-02-24 16:36:42.137517871 +0000 UTC m=+0.082233401 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 24 16:36:42 compute-0 podman[267416]: 2026-02-24 16:36:42.15295698 +0000 UTC m=+0.109363378 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, release-0.7.12=)
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.321 188707 WARNING nova.virt.libvirt.driver [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.322 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5313MB free_disk=72.15636825561523GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.322 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.323 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.394 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.395 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.411 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing inventories for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.429 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating ProviderTree inventory for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.429 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Updating inventory in ProviderTree for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.444 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing aggregate associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.471 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Refreshing trait associations for resource provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4, traits: HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.502 188707 DEBUG nova.compute.provider_tree [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed in ProviderTree for provider: 3c29c547-d990-4bd5-9bfd-810bbeade4e4 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.519 188707 DEBUG nova.scheduler.client.report [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Inventory has not changed for provider 3c29c547-d990-4bd5-9bfd-810bbeade4e4 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.522 188707 DEBUG nova.compute.resource_tracker [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 24 16:36:42 compute-0 nova_compute[188703]: 2026-02-24 16:36:42.523 188707 DEBUG oslo_concurrency.lockutils [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:36:44 compute-0 nova_compute[188703]: 2026-02-24 16:36:44.939 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:46 compute-0 nova_compute[188703]: 2026-02-24 16:36:46.005 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:47 compute-0 podman[267455]: 2026-02-24 16:36:47.143221026 +0000 UTC m=+0.091234155 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 24 16:36:49 compute-0 nova_compute[188703]: 2026-02-24 16:36:49.943 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:51 compute-0 nova_compute[188703]: 2026-02-24 16:36:51.008 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:51 compute-0 podman[267474]: 2026-02-24 16:36:51.161521388 +0000 UTC m=+0.121688112 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 24 16:36:51 compute-0 podman[267475]: 2026-02-24 16:36:51.218175875 +0000 UTC m=+0.167168836 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 24 16:36:55 compute-0 nova_compute[188703]: 2026-02-24 16:36:55.030 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:36:55.776 108026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 24 16:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:36:55.777 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 24 16:36:55 compute-0 ovn_metadata_agent[108021]: 2026-02-24 16:36:55.777 108026 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 24 16:36:56 compute-0 nova_compute[188703]: 2026-02-24 16:36:56.011 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:36:58 compute-0 podman[267521]: 2026-02-24 16:36:58.13302487 +0000 UTC m=+0.086645622 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 24 16:36:59 compute-0 podman[204685]: time="2026-02-24T16:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:36:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:36:59 compute-0 podman[204685]: @ - - [24/Feb/2026:16:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Feb 24 16:36:59 compute-0 nova_compute[188703]: 2026-02-24 16:36:59.950 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:01 compute-0 nova_compute[188703]: 2026-02-24 16:37:01.017 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:01 compute-0 openstack_network_exporter[207830]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:37:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:37:01 compute-0 openstack_network_exporter[207830]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:37:01 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:37:04 compute-0 nova_compute[188703]: 2026-02-24 16:37:04.954 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:06 compute-0 nova_compute[188703]: 2026-02-24 16:37:06.023 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:07 compute-0 podman[267545]: 2026-02-24 16:37:07.12763074 +0000 UTC m=+0.084124494 container health_status 0e7f62e00e64556e0a61d708cdc931e5659acad9a785acdbe31361a36981435a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 24 16:37:07 compute-0 podman[267546]: 2026-02-24 16:37:07.127643691 +0000 UTC m=+0.080808734 container health_status e671c81117a3b08cb4f7681ab056f6d4935d6caba2271eefb8b300af7cb0c7c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 24 16:37:09 compute-0 nova_compute[188703]: 2026-02-24 16:37:09.959 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:11 compute-0 nova_compute[188703]: 2026-02-24 16:37:11.027 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:13 compute-0 podman[267588]: 2026-02-24 16:37:13.153046616 +0000 UTC m=+0.105526903 container health_status e33a428014cfd482542a60e7d96d8bd62f41cfaca52c7fee2b9185e76b969733 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:37:13 compute-0 podman[267587]: 2026-02-24 16:37:13.176689378 +0000 UTC m=+0.135214399 container health_status 106e53e96b75e03b4496e688d7fbbaf84ca9f4730e3567713bd4693692dfd561 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9)
Feb 24 16:37:14 compute-0 nova_compute[188703]: 2026-02-24 16:37:14.962 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:16 compute-0 nova_compute[188703]: 2026-02-24 16:37:16.031 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:17 compute-0 sshd-session[267625]: Accepted publickey for zuul from 192.168.122.10 port 38122 ssh2: ECDSA SHA256:GudNwvbehpz3MEDeAsvxs0QHUjZfBMLJJgJNCDXyE6I
Feb 24 16:37:17 compute-0 systemd-logind[813]: New session 34 of user zuul.
Feb 24 16:37:17 compute-0 systemd[1]: Started Session 34 of User zuul.
Feb 24 16:37:17 compute-0 sshd-session[267625]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 24 16:37:17 compute-0 podman[267627]: 2026-02-24 16:37:17.380508401 +0000 UTC m=+0.102567083 container health_status e932240cd5dbb336647321f7422cc386711e09e132fc6e9fb309874f8666fa3a (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1770267347, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z)
Feb 24 16:37:17 compute-0 sudo[267650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 24 16:37:17 compute-0 sudo[267650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 24 16:37:19 compute-0 nova_compute[188703]: 2026-02-24 16:37:19.964 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:21 compute-0 nova_compute[188703]: 2026-02-24 16:37:21.035 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:21 compute-0 ovs-vsctl[267815]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 24 16:37:22 compute-0 podman[267851]: 2026-02-24 16:37:22.173468486 +0000 UTC m=+0.125794403 container health_status 2b41b52851a4e785e3b01457ff669ce50283e0a020bb915b81b53ce61eaeccda (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2e90df8e69974afa4cf6b9d4365ea825, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 24 16:37:22 compute-0 podman[267852]: 2026-02-24 16:37:22.215040324 +0000 UTC m=+0.160675900 container health_status 6f41e0eba6c4bb5d95a10785ac91bcc98865321e8f8f2c19c21aba4354f4bb52 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '00eb1f349a3c63cfd645872fd38d8f58985993c1e264f4a2dea0b2bc96b29b8f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 24 16:37:22 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 24 16:37:22 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 24 16:37:22 compute-0 virtqemud[187820]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 24 16:37:24 compute-0 crontab[268266]: (root) LIST (root)
Feb 24 16:37:24 compute-0 nova_compute[188703]: 2026-02-24 16:37:24.966 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:25 compute-0 systemd[1]: Starting Hostname Service...
Feb 24 16:37:25 compute-0 systemd[1]: Started Hostname Service.
Feb 24 16:37:26 compute-0 nova_compute[188703]: 2026-02-24 16:37:26.037 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:28 compute-0 podman[268681]: 2026-02-24 16:37:28.310733696 +0000 UTC m=+0.081296346 container health_status 4df1bf9bf00e3c486f0a0c8d3e9f3a0811ea105d58ad91ca2b5e2819ef3325d9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'b80ebd4199cef4fc1bfe62c7b99318713bb2efd03b4015965a23b1730f1b932f-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 24 16:37:29 compute-0 podman[204685]: time="2026-02-24T16:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 24 16:37:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28004 "" "Go-http-client/1.1"
Feb 24 16:37:29 compute-0 podman[204685]: @ - - [24/Feb/2026:16:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
Feb 24 16:37:29 compute-0 nova_compute[188703]: 2026-02-24 16:37:29.969 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:30 compute-0 nova_compute[188703]: 2026-02-24 16:37:30.524 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:37:30 compute-0 nova_compute[188703]: 2026-02-24 16:37:30.942 188707 DEBUG oslo_service.periodic_task [None req-b316a39c-4cac-4054-a5b1-28f2bd38ac4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 24 16:37:31 compute-0 nova_compute[188703]: 2026-02-24 16:37:31.040 188707 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 24 16:37:31 compute-0 openstack_network_exporter[207830]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 24 16:37:31 compute-0 openstack_network_exporter[207830]: 
Feb 24 16:37:31 compute-0 openstack_network_exporter[207830]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 24 16:37:31 compute-0 openstack_network_exporter[207830]: 